Thanks! I'm considering introducing it to a project. It uses JS and Vue, but I think it can still be a match. A lot of props get passed around through many components, and this could be a way to just pass the whole object but still make sure all the fields are present.
@christoomey Hi Chris! This talk is not that old, but I was curious: are you still using this pattern of exporting GraphQL fragments from React components? Are you still happy with it and would you still recommend this approach? :)
https://www.youtube.com/watch?v=Qsoj4s_Ml6s
I just released a completely rewritten version of Gimme A Token, which hopefully does a better job of explaining the IndieAuth flow as you go.
I don’t support checking in to vehicles yet, but it would be of type airplane, number PH-BGQ and as a URL: https://skyteamvirtual.org/fleet/aircraft/PH-BGQ
Surf the web at lightning speed
I am at an airport, and I just saw a booth where I could access the web for free if I had the right card, or for a certain amount of euros per time unit. I saw the Microsoft Internet Explorer logo. All the computers were available, except for one. I assumed the girl who was using it worked there. Why else would you use a place like that?
It made me realise how much the web has changed. In another era, there were publishers, who put information on paper. These papers could then be bought by those who were interested in the information. It was a publisher's world. The story has been told many times: the internet changed that, information became free, anyone could publish. But the web has evolved once more.
Why would I not want to use a computer in that booth even if it were free? Because I have no idea what to do on the web. Do I go to some news sites? Do I look up the time of my flight again? It would only be to kill time. Most of the public information is boring, the real fun is on Twitter, Facebook or maybe even my e-mail, where the information is tailored to my tastes. And no, I would not feel okay with entering my password into one of those things.
I should probably not call the social media part of the internet ‘the web’. It is partly not accessible without logging in, not all parts have clear URLs, and it depends on a few giant websites. But there’s more fun on those silos, or at least: there is more of that promised content, published by anyone, everyone, especially your friends. And the best way of viewing it is by authenticating myself as me. Luckily this is not a problem: I have a device in my pocket that is connected and authenticated 24/7. Compared to that, the web is a dull place, with impersonal information.
Will the web make way for these giant silos? Is the time of publishing on your own site over? I hope not. I’m experimenting with this on this website, which is my personal website. There is a link in the upper-right corner which says ‘log in’. That is not a link for me, that is a link for you. You can log in using various methods (ok, only IndieAuth and Twitter are supported at the time of writing). After you are logged in you will see more posts, like my checkin into the airport. If I know you and shared a post especially with you, you will see those as well.
We still need some work on staying authenticated, preferably to one app that collects private posts for you, as you. But there is more web in this approach than we have in social media. And that is nice. I like more web. The web is exciting. Let’s not let it stay boring with only public, general information. Let’s share the personal here too. And let’s create a way to do that in a more private way, where you control who sees your posts. See you on the IndieWeb!
I might have to POSSE manually for a while, for Twitter already blocked me once when following a few people at once, but: got a new Twitter account to talk and read about code and the web! #indieweb #manualuntilithurts
GraphQL Stitching versus Federation
Recently, I started working on a project that uses GraphQL Stitching to bring multiple GraphQL Schemas from backend servers into one central Schema for clients to consume. Sadly, I cannot say I'm a fan of it. In researching how it all works, I found out that it was actually deprecated, and by digging a bit deeper, I found out that that happened last Thursday. I happily investigated the proposed alternative of Apollo Federation.
What's wrong with Stitching?
Let's first dive into what I don't like about Schema Stitching. I must admit that I'm no expert on the subject: I am still trying to grasp what's possible. And it seems like a lot is possible. One can grab fields from one service, enhance them with fields from other services, leave fields out, all kinds of things.
For example: if I have a backend service that does books, it could have a Schema like the following:
type Query {
  books: [Book]
  book(id: ID!): Book
}

type Book {
  id: ID!
  isbn: String!
  title: String
}
If you then have a backend service that does reviews of books, it could have a Schema like the following:
type Query {
  reviewsForIsbn(isbn: String!): [Review]
}

type Review {
  text: String
  stars: Int
}
The thing you can do with Stitching is that you can add a reviews field to the Book type, by using the isbn field on Book and using reviewsForIsbn(isbn:) to grab it. And this is already the part where I should leave out the details of how to do it: I have no idea, but I know it's possible, and even in weirder setups than this one.

The nasty thing about it is that it creates a dependency on both the isbn and reviewsForIsbn fields. This dependency is not visible from either backend service, only in the code of the stitching service. And since GraphQL is all about callbacks and resolver functions, that code in the stitching service can become difficult to read, especially if you do a lot of stitching.
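To give a rough idea of what such stitching code looks like, here is a sketch with graphql-tools, written from memory, so treat the exact API with suspicion:

const { mergeSchemas } = require('graphql-tools')

const schema = mergeSchemas({
  schemas: [
    booksSchema,
    reviewsSchema,
    // the glue type lives here, in the stitching service:
    'extend type Book { reviews: [Review] }',
  ],
  resolvers: {
    Book: {
      reviews: {
        fragment: '... on Book { isbn }', // hidden dependency on isbn
        resolve(book, args, context, info) {
          // hidden dependency on the reviewsForIsbn root field
          return info.mergeInfo.delegateToSchema({
            schema: reviewsSchema,
            operation: 'query',
            fieldName: 'reviewsForIsbn',
            args: { isbn: book.isbn },
            context,
            info,
          })
        },
      },
    },
  },
})

Both dependencies sit right there in the resolver, invisible to the Books and Reviews services themselves.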
I think the term stitching is excellent: if you keep stitching Schemas together, you will get a giant stitchwork full of yarn and everything will be connected to everything. And that's a bad thing, because it ties you down: you can't move a thing, and if you pull one cord out, the whole thing might come undone.
With great power comes great responsibility. I think stitching is very powerful, but it's also very easy to shoot yourself in the foot with. As said: I was happily surprised to find out I wasn't alone and that it has been deprecated by Apollo (who, by the way, maintain the only GraphQL library that supported it).
The new fanciness: Apollo Federation
How to bind multiple GraphQL Schemas together if you can't stitch them? Apollo's new answer is Apollo Federation, which is a pattern of defining Schemas, and a service called Apollo Gateway. Short summary: the Gateway reads the federated Schemas, and based on the information they provide, it stitches it all together, without you having to write any code in the stitching layer.
This information is given in the form of type annotations and some callbacks, which are described in a specification. For Javascript, you can just use the @apollo/federation package. For other libraries (such as Absinthe for Elixir) you have some work to do.
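To show how little is left to write in the stitching layer: setting up a Gateway with the Javascript packages looks roughly like this (a sketch; the service URLs are of course made up):

const { ApolloServer } = require('apollo-server')
const { ApolloGateway } = require('@apollo/gateway')

const gateway = new ApolloGateway({
  // The Gateway fetches the federated Schemas from these services
  // and composes them into one graph; no resolver code needed here.
  serviceList: [
    { name: 'books', url: 'http://localhost:4001/graphql' },
    { name: 'reviews', url: 'http://localhost:4002/graphql' },
  ],
})

const server = new ApolloServer({ gateway, subscriptions: false })
server.listen()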
But let's review the previous two services again, but now using Federation. Federation makes use of a few directives, which are designed to give more information to the query executor. They begin with an @ sign and can have parameters.
Consider the Books service again:
extend type Query {
  books: [Book]
  book(id: ID!): Book
}

type Book @key(fields: "isbn") {
  id: ID!
  isbn: String!
  title: String
}
Not much changed here, other than the extend keyword in front of the Query type, and the @key directive on Book, which declares that a Book can be identified by its isbn. The Books service, in this simple example, does not need any external data. The Reviews service, however, does:
extend type Query {
  reviewsForIsbn(isbn: String!): [Review]
}

type Review {
  text: String
  stars: Int
}

extend type Book @key(fields: "isbn") {
  isbn: String! @external
  reviews: [Review]
}
There is some stitching here. The Reviews service knows about Books, but that is fine: a review would not make sense without an object to review, in this case, books. Notice how, in the Schema, it is defined that, in order for the Reviews service to resolve the Book type, it needs an isbn: that is the @key. The isbn field itself is marked @external, because it is owned by the Books service; the reviews field is the new part that the Reviews service contributes to Book. Note that we could drop reviewsForIsbn entirely now, but still maintain reviews on Book.
Some questions I think I have answers for but am not sure about:
- Why add a Book type to the Reviews service? Because someone has to know, and I prefer it to be the service that has the other entity next to its domain.
- How does the Gateway know to call reviewsForIsbn? Well, it doesn't: given the type and the requested @key(fields:), and some underwater endpoint, the Reviews service is now responsible for finding reviews that match that type (Book) and key, and it can have a resolver for that. (See the sketch below.)
- Why can't the Books service just provide the full Book type? Because then knowledge about the link between Books and Reviews would be in both services.
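To make that second answer concrete: in the Javascript implementation, the Reviews service would pair its Schema with a resolver roughly like the sketch below. This is my reading of the spec, not tested code, and findReviewsByIsbn is a hypothetical data-access function.

const { ApolloServer } = require('apollo-server')
const { buildFederatedSchema } = require('@apollo/federation')

const resolvers = {
  Book: {
    // The Gateway sends a representation of the Book containing the
    // @key fields it promised: { __typename: "Book", isbn: "..." }.
    // No client ever calls reviewsForIsbn directly for this.
    reviews(book) {
      return findReviewsByIsbn(book.isbn)
    },
  },
}

// typeDefs holds the Reviews Schema from above
const server = new ApolloServer({
  schema: buildFederatedSchema([{ typeDefs, resolvers }]),
})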
I don't pretend to understand all the implications of both approaches, but intuitively, this second approach seems better to me. There is no code that we have to write in the Gateway anymore, and the Schema is split up between the services, which each only know their own part about the shared entities.
Downsides: gotta bring it to Absinthe
Unfortunately, like with Stitching, Apollo for Javascript is the only library that supports this at the moment. As mentioned, there is a spec describing what to do to support it elsewhere (still using the Apollo Gateway), but it is very early days.
I tried some stuff with Absinthe, and was happy to see macros for directives. After some exploration, however, I came to the conclusion that the directives you can define with these are only usable in the Schema you put out, and thus in your queries. Since the Schema itself is defined with macros in a DSL, I cannot use the directives there. I will have to set up my own macros for it, and that is its own rabbit hole.
Moreover: in the current release of Absinthe, there is not yet a way to get your parsed Schema out. This Schema, however, has to be sent to the Gateway for it to work. I tried working around it by defining the Schema twice (by hand and with the DSL), but there are still a lot of rabbit holes in the union _Entity (which consists of all types that use the @key directive) and the scalar _Any (which maps to all @external types).
With Stitching, all the work is done in the external service, so your backend services do not have to know about it. With Federation, all your backend services have to be aware that they are part of something bigger in order to participate in that bigger picture.
Still, I really like the idea, and I think it will hold better than stitching all the things manually.
Private posts: the move of the checkins
Can I tell you a secret about writing software? We all just wing it. We all try to write the code as beautiful, readable and maintainable as we can, but at the end of the day, the business wants our projects to be done yesterday, not in three weeks. So despite best intentions, corners are cut and things that should not know about other things are calling each other. Some call this spaghetti.
I will also not lie to you: the codebase of this here weblog, at least in its current form, is not free of spaghetti or mess. Corners were cut in a time where I did not know there even were corners to begin with. I improved the code many times, all in different directions, because you’re always learning better ways to do it (and I still do). Some call this a legacy codebase.
Because of the shape the code is in, I did not want to add large features anymore. I wanted to rewrite it, all of it. But as Martin Fowler said somewhere: the only thing you will get from a Big Bang Rewrite is a big bang. It’s better to incrementally improve your application, so I tried. I tried to come up with clever strategies to do so, to keep parts of my site running on old code while the rest was fresh and new. In all those strategies, my blog entries would be last, because there are 9000+ of them and they need to be moved all at once.
In order to support private posts, however, it is precisely the code that serves my blog entries that needs work. This means that, while I have private posts very high on my wishlist, I postponed them to after The Rewrite. And I kept attempting to get there, but since it’s a big project for spare-time hours, private posts were impossible for a long time.
The year of the private posts?
Recently, the call for private posts became louder again. Aaron Parecki is trying to get a group of people together to exchange private posts between Readers. I would like to be one of them. In some regard I’m already ‘ahead’ of the game, because I have supported private posts on my site since 2017. The thing is: you need to know the URL of the post to actually read it.
I attended both IndieWebCamp Düsseldorf and Utrecht last month. At the first one, we had a very good session about the UI side of private posts. The blogpost I wrote about it unfortunately stayed in draft. The summary: I used to denote private posts by adding the word ‘privé’ in bold below the post, next to the timestamp. Since the hackday I show a sort-of header with a lock icon, and a text telling you that only you can see the post, or you and others, if that’s the case.
A big takeaway from Düsseldorf was that I don’t need to do it all at once. To me, the first step to private posts is letting people log in to your site. This can be done with IndieAuth, or by using IndieAuth.com (which will move to IndieLogin.com at some point). The second step is to mark a post as private in your storage, and only serve it to people who are logged in. The third step is to add a list of people who can see the post, and only show it to those people. This is where I was.
The fourth step should then be: show those private-for-all posts in your feed, for anyone logged in. The fifth step is to also show those private-for-you posts in their feed, which is trickier but not impossible. The sixth step would then finally be letting the user’s Reader log into your site on their behalf. I feel like I have seen that sixth step as the next step for way too long. By making it the sixth step, it is now only about authentication / authorization, not about what to show to whom (because you got that already).
A bonus step could then be to add groups, so you can more easily share posts with certain groups of people. I have written about the queries involved before. This is a bonus step, because it makes your life easier as maintainer of the site, but it is invisible to the outside world. (I would prefer not to reveal to people which groups they are in, nor the names of the groups the post was shared with. Those groups are purely for my own convenience.)
Of course, you can take different steps, in a different order. But to me, this is the path to where I want private posts to be.
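Most of steps two and three boil down to a single check when serving a post. A sketch of that idea in Javascript (not my actual code; all names are made up):

// 'audience' is empty for private-for-all posts and holds
// profile URLs for private-for-you posts.
function canSee(post, viewer) {
  if (post.visibility === 'public') return true
  if (!viewer) return false                   // step two: log in first
  if (post.audience.length === 0) return true // private-for-all
  return post.audience.includes(viewer.url)   // step three: the list
}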
Channeling my inner Business Stakeholder
After breaking it down into these nice steps, I’m still left with a legacy codebase. My biggest takeaway from Utrecht was that I should be more pragmatic about it. The code quality of my site is only visible to me; what matters is the functionality. And I want this private post functionality.
I still did some refactoring that could be useful to future versions of this site, but I won’t bore you with that. I decided that it was not worth the wait, and that private post feeds should be part of this version of my blog.
Last Tuesday, there was yet another chat about private posts and how to do it. There was a question about the progress, whether or not something was decided at the recent EU-IWCs. But there is no decision, there is no permission, there is no plan to be carried out. There is just us, wanting to use this feature that does not exist yet. The only way to actually get there, is to build it ourselves and see what works and what doesn’t.
So I hacked it together, in my existing code. I believe I broke things, but I have fixed some. If you see more, please tell me. But I got the functionality, and that is what counts.
Marking all my checkins private
There is this app called Swarm. Some members of the IndieWeb Community use it, because it’s fun. I would call it the Guilty Pleasure of the IndieWeb, the last Silo. I use it too, especially when I’m in a city for IndieWebCamp. It’s almost impossible not to use it then: the people I’m with are checking me in anyway.
I like having a log of every bar, restaurant and shop I have been to, and I see value in sharing it. But it also creeps me out to have all that information about me in a public place like this. Even on Swarm, checkins are only shared with friends (and advertisers), not the public. It seems to me that my checkins, then, are the perfect place to start with private posts.
So that is what I made: I marked all my checkins as private-for-all. This means they are still public at the moment, but you need to log in, which currently rules out bots and practically every visitor. But chances are you know how to use IndieAuth, or have a Twitter account. You can then log in to my site by clicking the link in the upper right corner. After you log in, you will see all my checkins appear in the main feed, each of them with a message that it’s only visible to logged in users.
In addition, there is a new page: /private. The link will appear in the menu when you are logged in. This page shows you all the private posts that are specifically shared with you. Some of you might actually see a post there.
Steps
One part of me says “but is private-for-all private enough for my checkins?” Another part of me says “it’s nice that you support the feature, but no-one is going to log into your site.” Yet another part of me says “what is it worth, writing more code in this codebase you want to get rid of anyway?” But it’s fine. I took a step, and that’s what’s important. From here, I can look into AutoAuth, and maybe, maybe, we can get some private feed fetching to work before IndieWebSummit.
But in the worst case: I own my checkins, and I control who sees them. And that’s a very nice place to be in.
So of course your editor does this too, but I just ran ag -l Hidden: | xargs vim and :argdo %s/^Hidden: true/Visibility: hidden/ and :argdo write (and the same for ag -l Private: | xargs vim and :argdo %s/^Private: true/Visibility: private/) on my content/ folder and it feels great.
Vim and Git
So I woke up this morning and thought: I should really dig more into using Git within Vim with tpope's vim-fugitive. So I did.
I was already using the :Gblame command it provides a lot. This command opens a vertical split: on the right the current file, and on the left, for each line of that file, the hash, author and date of the last time that line changed. This is great for quickly identifying the author and age of a piece of code, which helps me a great deal in understanding the purpose behind the code. It's sad that Git gives this feature a name with such a negative connotation. PHPStorm calls this 'Git annotate', which feels nicer. One can also do a git blame from the command line or on Github, but having this command right in your editor is very useful.
I noticed, however, that I can browse the commits even further, by pressing enter when the cursor is on one of those blame lines. I would instantly be lost in weird screens about commits. I also figured there would be ways to commit from Vim (what kind of Git plugin would it be otherwise), but I hadn't figured out how it worked.
I think the best tour of the plugin, apart from its help files, is this nice overview of five videos on Vimcasts.org, about the basic commands, the difference between committed files, the working copy and the index, resolving merge conflicts and other diffs, using Vim to browse Git and specifically browsing commit history. Recommended when diving into this; I learned a lot about the inner workings of Git too :)
Switched my design from using a grey background with white backgrounds for the posts, to a white background for both, and a light fading box-shadow around the posts. I am very pleased with the results!
What did you do this weekend?
- I edited my vimrc-file
Typing about types of types
I am switching jobs and programming languages. I already follow a few Ruby oriented podcasts, but I was looking for some new ones. So in light of that search I just listened to an episode of Remote Ruby, the one with Adam Wathan. That seemed like an especially interesting one, because Adam is a Laravel PHP developer, which is what I am now.
Later on in the episode they of course talk about Tailwind, but they start with some very interesting things about PHP and the direction it's taking. Some of what they say I had heard before, but a lot of it is what I have thought myself and now heard back.
To summarize in my own words: PHP is moving more and more towards adding types and type annotations everywhere. They are not alone in this, types are hot: in the Javascript ecosystem Typescript is growing fast, and lots of other statically typed languages grow as well. But PHP's approach is a bit odd: there is no compile step with PHP, so all your errors are still runtime errors.
I really feel more at home in the Ruby world (and Laravel, as the home of the "PHP developers who want PHP to be like Ruby"), which is why switching to Ruby made sense. But it's also a bit scary to actually join the shrinking side of the world of programming. Ruby is so dynamic, I don't see it supporting type checking anytime soon.
On the other hand, I see the value of types. I think it's valid to say that computers are getting better and better at understanding programs, and types are an important part of how they do that. I like the idea of the friendly compiler and the promises of Elm. But should that mean my PHP code has to be full of types everywhere? Only for more specific crashes at runtime?
It was nice to hear Adam talk about this other language called Crystal, which apparently does the complete opposite of PHP: compile with static types, but with as few type declarations as possible. That made me realise even more that my problem might not be with types, but with these annotations everywhere.
While we're on the topic of types...
Also worth noting is Elixir's take on types. Elixir, like Erlang and Ruby, is dynamically typed, and due to the nature of the runtime it's actually very hard to create a type system for it. Some smart people tried it for Erlang and failed, but wrote an article about it and afterward created this tool called Dialyzer.
Dialyzer does not run on compilation, but more as a tool on the side, during development. It looks at your code (and, in Elixir, at the types you can optionally specify), and if it's certain that your code has an error, it will complain. This means it will not catch every bug there is to catch, which is nearly impossible in such a dynamic language (I have heard the article explains why), but it will still catch some and provide value that way.
To defend PHP a bit here: this looks a lot like how my type-enthusiastic co-worker uses types in PHP. His Dialyzer is called PHPStorm, and like Dialyzer it runs analysis on the code during development. This also reduces the runtime errors in exactly the same way.
The point where I get the creeps, however, is the place where the type is defined. In Elixir, it's an annotation you can optionally set on a function, and no trace of this hint remains in the compiled code. In PHP, the type hints are baked into the language and clutter the real code. And they are just not that good.
For example: in Elixir you can say "the thing this function returns is a Banana, please don't look at it". The function can then actually return a string, but once you access it as a string, Dialyzer will complain, because it's a Banana, you know. In PHP, there are only strings and integers, and if you want something for yourself you have to wrap it in an object. And before you know it, there are classes everywhere that do not really add anything more than type information.
I am not the only PHP developer, and it seems like most PHP developers are happy with the direction that is taken. It's a bit sad that, due to the way things are adopted, more and more libraries and thus applications are forced into using types. Although PHP says type declarations are optional, they are not optional when extending classes, so they leak into more and more PHP code.
It was also interesting hearing Adam talk about Ruby and especially calling out Elixir as one thing he would like to explore if PHP fails him. That was sort of my plan too, but I'm actually acting on it now. I am very curious how that will turn out.
Match on the end of a pipe
This is a private copy of my post on the ElixirForum
So I have looked for this topic and found similar ones but not actually this one. Forgive me if I missed it.
While I really love Elixir's pipelines, I see myself writing this pattern from time to time:
def some_function(start) do
  result =
    start
    |> Enum.map(&do_something/1)
    |> Enum.filter(&but_without_these/1)

  {:ok, result}
end
For this pattern, people seem to have come up with operators that do things with second or last arguments (I found one of those discussions). I'm not really interested in that, I want to make another point.
In the above pattern, I start reading about a result, then I see start, and then the actions that lead to the result. Your eyes notice the pipeline quite fast, you see what happens, but especially when the action after the pipeline is more complicated, you start asking yourself: wait, what did they call the result of this pipeline? To find out, you have to go all the way up to read the variable name.
Put another way: this way of writing messes with the order in which things happen. First, start is evaluated, then the pipeline is run, and only after that is the match completed, which gives result a value.
I think it would be nice to be able to write it like this:
def some_function(start) do
  start
  |> Enum.map(&do_something/1)
  |> Enum.filter(&but_without_these/1)
  >>> result

  {:ok, result}
end
This removes some indentation and reads nicely in the order of what happens when. It might even stop the requests for a |2> operator: you can just start a new pipeline after the first one with this variable wherever you want.
Oh, and it's a pattern match, so it's quite powerful, as you know.
def some_function(start) do
  start
  |> Enum.map(&do_something/1)
  |> Enum.filter(&but_without_these/1)
  >>> [first | _]

  Enum.zip(start, first)
  |> Enum.reduce(&more_transforms/2)
  >>> result

  {:ok, result}
end
Note: I used >>> here because it's one of the available custom operators, but I think => would be prettier (but probably taken) or <| (but that's not available). Here's a very naive macro to make it work:
defmacro left >>> right do
  {:=, [], [right, left]}
end
Again, if this has been proposed too many times, please pardon the intrusion :)
My site broke, because the /2019 folder in my storage did not yet exist, and somewhere over the last year I added code that relied on that. So 19 years in, the Millennium bug is still active.
Just implemented a Siri Shortcut for IndieAuth! Now let’s see if I can integrate it with my Micropub shortcut.
Enhancing the Micropub experience with services
At IndieWebCamp Berlin this year, at the session about Workflow, we came up with an idea for enhancing your blogposts with an external service using Micropub. I’ve thought of a few variants, and in the spirit of the IndieWeb I should first build them and then show them, but I haven’t got around to it yet.
So y’all will have to make do with just a description. I might implement it at some point, if I have a real use case for it. I don’t actually want weather on my posts.
But let’s start at an idea I first had at IndieWebCamp Nürnberg.
The Syndication Button Hack
Micropub is an open API standard that allows clients to post to servers. In the spec, there is a mechanism for clients to show buttons for syndication targets. The client asks the server what targets there are, and the server responds with a list of names and UIDs. The client then shows the names on buttons (or near checkboxes) and if the user selects one, the UID is set as the mp-syndicate-to field of the post. The server is then responsible for syndicating the post to, say, Twitter or Facebook.
This mechanism is widely supported among clients. And since the client does not have to do any work actually related to the syndication, it can also be used for other things.
Imagine the server implementing private posts. Support for private posts in Micropub clients is practically non-existent at the moment of writing. But we can quite easily get a button that toggles the state of the created post:
GET /micropub?q=syndicate-to
Authorization: Bearer xxxxxxxxx
Accept: application/json

HTTP/1.1 200 OK
Content-type: application/json

{
  "syndicate-to": [
    {
      "uid": "private-post",
      "name": "Private post"
    }
  ]
}
Since it’s up to the server to syndicate to private-post, it can decide not to syndicate at all, but to mark the post private instead. There are a number of possibilities with this: toggling audiences, marking the post as draft. All these things could get their own queries at some point, but until then, this will work in almost all clients.
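Sketched out on the server side, the trick is barely any code. A pseudo-Javascript version with made-up helpers:

// inside the Micropub create handler
function handleCreate(body) {
  const post = buildPost(body)
  const targets = [].concat(body['mp-syndicate-to'] || [])

  if (targets.includes('private-post')) {
    post.visibility = 'private' // don't syndicate: flip the state instead
  }

  // real syndication targets would be handled here as usual
  return store(post)
}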
Also notice the Bearer token. The server can know which client is asking, so it could show a different set of buttons, depending on the client. Quill supports draft posts? Don’t show that button in Quill.
Enter the Weather Service
Back to the idea of Berlin, which takes this one step further. If we have the Syndication Button Hack in place, we can also hook up external services to enhance our blog posts.
Say I display a location with every entry I post. I could have a button that says: ‘Weather Service’. Activating that button would instruct my server to ping the Weather Service about the existence of this new post. This could be done by WebSub or some other mechanism.
Back when I signed up for the Weather Service, I gave it access to my Micropub endpoint as well. The Weather Service waits for new posts to arrive, reads their location, fetches the weather for that location, and sends a Micropub update request.
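Such an update is plain Micropub. The request the Weather Service sends could look something like this (the weather property and its value are made up; only action, url and add come from the spec):

POST /micropub HTTP/1.1
Authorization: Bearer yyyyyyyyy
Content-type: application/json

{
  "action": "update",
  "url": "https://example.com/2018/11/some-post",
  "add": {
    "weather": ["Light rain, 8 °C"]
  }
}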
The only new part this requires is the button and the ping to the Weather Service. All the other parts already exist in clients and servers. Ah, and someone will need to build that Weather Service.
External services in general
The nice thing about this model is that the heavy lifting is on neither the Micropub client nor the server. It’s on the external service. And it’s not that heavy a lift, because the external service does only one thing and does it well. It can give superpowers to both Wordpress blogs and static generated sites.
The external service could provide information about the weather, but think of Aaron’s Overland and Compass: it could also provide the location of the post given a point in time. There might be more. Expanding venue info?
One thing to watch out for, is concurrent processing of these Micropub requests. This might not be a problem for you, but I store my posts as flat files. If two services send an update request for the same post, one might start, and the other might overwrite the first one. (I really need to check how my blog handles this case.)
When you are using a database like MySQL, you should be safe for this kind of stuff, but it still depends on the implementation of your Micropub endpoint.
Other ways of doing it
Peter did not like this first approach, because his post would have multiple visible states (first a few seconds without weather, then with it).
Another approach would be a sort of Russian-doll Micropub request, where you sign in to an external service which signs in to your Micropub endpoint. This would mean that quill.p3k.io posts to weather.example/micropub, which intercepts the request, and sends the same request, with weather info added, to seblog.nl/micropub.
I don’t like that approach either, because now I have to trust the Weather Service with my tokens. In the first approach, every service gets their own scoped token, which is safer.
Since the server knows how many services it has asked to enhance the post, it could also keep it in draft until the last update request comes in. This would require more work on the server’s side of things, and there has to be a timeout on it, but it could be a way to mitigate Peter’s problem.
As always: feel free to steal or improve, but please let me know.
PushAPI without Notifications
Recently I’ve been to IndieWebCamp Berlin, where I spent the Hack Day on abusing the PushAPI to update ServiceWorker caches.
I would like to start with a small section on what and why, but while I was procrastinating on writing this blog post (the pressure is high), none other than Jeremy Keith wrote a blog post about it. Since that’s a perfect what and why, there are just two things left for me to do here: demo and how.
Demo
I did a demo in Berlin, but the demo-gods were unforgiving. It did not work at all, but when I got back to my seat, it started working again. What happened? My Mac tried to be nice and turned off notifications while I was presenting.
But, as to make up for it, the new macOS Mojave shipped with a screen capturing tool. So here is a retry of the demo in under 5 minutes:
The how
This might not be the most interesting part of it, but it’s nice to share work. It’s not a full comprehensive guide on how to do this stuff, because that would just take way too long. See it as a quick tour of the different APIs involved.
I googled it all anyway. You can google along.
Oh and if you want to skip ahead: there are some use cases at the end.
Showing a local notification
Like with any Javascript, you should check support before you ask something. There is a list of things to ask in the below code example: we want Notifications, it should not be denied, there has to be ServiceWorker support, and for the part later on, there should be a PushManager too.
Once we prompted the user and got permission, it’s as simple as getting our ServiceWorker registration and asking it to show a notification. As you can see: this involves the ServiceWorker, but it does not involve any other servers.
function activateNotifications() {
  Notification.requestPermission()
    .then(status => this.status = status)
}

function supportsNotifications() {
  return ('Notification' in window) && (this.status !== 'denied') &&
    ('serviceWorker' in navigator) && ('PushManager' in window)
}

async function sendTestNotification() {
  const reg = await navigator.serviceWorker.getRegistration()
  return reg.showNotification('Hallo, test!')
}
Note: the demo code is using Vue, which I leave out in this blog post to simplify things. But that’s where this points to: a collection of variables on the Vue instance.
Subscribing for the PushAPI
Once the user clicks the button ‘Subscribe’, the following function gets triggered. In here, we again get the ServiceWorker registration, and then access the PushManager on it, which we tell to subscribe.
async function subscribe() {
  const reg = await navigator.serviceWorker.getRegistration()
  const sub = await reg.pushManager.subscribe({
    userVisibleOnly: true, // required for Chrome
    applicationServerKey: urlB64ToUint8Array(this.publicVapidKey)
  })
  this.notifications = true

  const key = sub.getKey('p256dh')
  const token = sub.getKey('auth')
  await this.$http.post('/subscriptions', {
    endpoint: sub.endpoint,
    key: key ? btoa(String.fromCharCode.apply(null, new Uint8Array(key))) : null,
    token: token ? btoa(String.fromCharCode.apply(null, new Uint8Array(token))) : null
  })

  this.subscribed = sub !== null
}
Some browsers have their own way of doing authentication, but the most universal is with a Vapid key pair. The package I use for the backend came with a way of creating them. We give the public key to the PushManager, which will give us a Subscription object.

In the end, we send the Subscription’s key, token and endpoint to the server via a POST request.
Note: my HTTP library of choice is axios and the urlB64ToUint8Array() function can be found here.
Storing the Subscription
For the backend, I’m using a Laravel package for WebPush, which allows me to save the endpoint with very minimal code:
public function update(Request $request)
{
    $this->validate($request, ['endpoint' => 'required']);

    $request->user()->updatePushSubscription(
        $request->endpoint,
        $request->key,
        $request->token
    );

    return response()->json(null, 201);
}
As you can see, it is using the user to associate the data with. (I fake the auth in the demo, which I do not recommend.) It ends up in a database, with four main columns: user_id, endpoint, public_key and auth_token.
In theory, you can go without users, but you will need to store the other parts. The token and key look like random strings, but the endpoint is an actual URL, on a subdomain of either Mozilla or Google, depending on the browser. (No support on Safari yet, mind you.)
These endpoints and tokens can expire, so you will need to keep an eye on the table.
Sending the notification
I can be short about this part: I have no idea. The following code is all it takes to trigger it:
Notification::send(
    User::all(),
    new NewBlogPostCreated($content, $notify)
);
... where $content is the content of the post, and $notify a boolean, telling my ServiceWorker whether or not to show a notification (we’ll get to that).
The NewBlogPostCreated class extends Laravel’s built-in Notification class and has these two methods:
public function via($notifiable)
{
    return [WebPushChannel::class];
}

public function toWebPush($notifiable, $notification)
{
    return (new WebPushMessage)
        ->title($this->notify ? 'notify' : 'update-cache')
        ->body($this->content);
}
There is a lot of magic behind the scenes here. I have no idea. In the end, they send a POST request to the endpoints of those users, after signing the right things with the right keys.
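For the curious: I believe the magic roughly amounts to what the web-push package does in Javascript. This is not the Laravel package's code, just the same idea sketched in another language:

const webpush = require('web-push')

// the same Vapid key pair the browser got the public half of
webpush.setVapidDetails(
  'mailto:admin@example.com',
  publicVapidKey,
  privateVapidKey
)

// subscription = { endpoint, keys: { p256dh, auth } }, as stored earlier
webpush.sendNotification(
  subscription,
  JSON.stringify({ title: 'update-cache', body: content })
)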
Receiving the notification and then don’t
Next, we’re back in Javascript-land, however, this is the ServiceWorker-province. The ServiceWorker, once installed, is a script, written in Javascript, but completely decoupled from any window. It lives in your browser and represents not one page, but your whole website.
It’s quite hard to wrap your head around at first, but I think the PushAPI makes it easier: there is no window involved with a push message, and there is no page involved with a push message. There is only your ServiceWorker, which acts for your whole website.
The ServiceWorker script itself consists of a series of callbacks that are executed whenever things happen. In the case of a push message, the 'push' event is triggered:
(function() {
  'use strict';

  self.addEventListener('push', function (e) {
    const data = e.data.json()

    self.caches.open('manual')
      .then(cache => cache.put('hello', new Response(data.body)))

    if (data.title == 'notify') {
      e.waitUntil(
        self.registration.showNotification(
          'New content!',
          {body: data.body}
        )
      );
    }
  });
})();
That’s all I need for receiving push notifications. I first retrieve the data from the message. Then I open the cache named ‘manual’ and I put the body of the message in that cache as the content of a URL (in this case ‘offline.test/hello’). It is made for pages, but I use it as a key-value store here.
Then I check the title field, which I have abused for this purpose. If it is set to the magic string ‘notify’, I will trigger the notification. If it’s something else I will do nothing.
This shows that I don’t have to: I can leave the notification out, but I still get a ServiceWorker activation and I can do whatever I want with it.
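For completeness: serving that cache entry back to a page is a matter of a 'fetch' listener in the same ServiceWorker. A minimal sketch, not the demo's actual handler:

self.addEventListener('fetch', function (e) {
  e.respondWith(
    // serve from any cache (including 'manual'), fall back to the network
    caches.match(e.request)
      .then(cached => cached || fetch(e.request))
  )
});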
Use cases
I think this can be used for creepy things (can I occasionally ping my ServiceWorkers and ask for data like ‘how many windows are open?’ and phone that home?), but I also think there are nice uses for this as well.
As Jeremy wrote: this can be used for magazines, podcasts and blogs to push new content to my phone, to read on a plane or in the subway when I’m offline. I see a nice feature for a web-based IndieWeb Reader too: it can push me copies of posts it collected.
I think the Reader is a nice place to use this. With great power comes great responsibility. Do I want to grant that great power to that weird magazine, that dubious podcast, that blog I visit once or twice a month? I might know you well, I might not. Do I trust you, pushing megabytes on my phone without me noticing?
Web apps like a Reader are easier to bond with. Plus: once I know my Reader supports reading offline, I might visit it in the subway. Will I remember the magazine?
The last bonus of the IndieWeb Reader specifically: it can send me posts from any magazine or podcast or blog, whether they support offline reading or not. But that’s more specific to the Reader than it is to Push.
I’m also very curious to know how things will evolve if ServiceWorkers get even more superpowers. How well will those pair with a free ServiceWorker activation? Lots of exploring to do!
Exploring queries for private feeds
One of the discussions this weekend in Berlin was on the topic of private feeds. Martijn and Sven made great progress by implementing a flow to fetch private pages using various endpoints for tokens and authentication.
Apart from the question of how to fetch private feeds, there is also the question of how to present them. The easiest way is probably to give every user their own feed, containing only the private posts for them. They can separately follow your public feed, and your queries are easier.
But in line with silos like Twitter and Facebook, I think I would prefer presenting one feed, with both public and private posts, scoped for the authenticated user. When I described this to Aaron he said that he liked it, but that he didn’t know where to begin with writing code that does that. I didn’t either, but it made me want to explore the possibilities.
On a sidenote: this feed design also raises another problem, of how to signal to the user that they can see this post but no-one else. I leave that one for another time.
Drawing rough lines around boxes
Borrowing from Facebook, there are roughly four categories you can share content in:
- public – These posts can be seen by anyone. This is the default on nearly all IndieWeb sites today.
- authenticated – These posts can only be seen if you sign in, but anyone can sign in. Facebook has this category and we can mimic it with IndieAuth, but it might not add that much value.
- friends only – This is a big category on Facebook, and made possible by the friendslist, which is also a big feature on Facebook.
- selected audience – Facebook also allows you to pick your audience on a per-post basis. This can be done by either selecting individual users, or selecting lists, which can contain users.
There is also the possibility of excluding specific people or lists from posts, but that one is even more advanced, so I put it out of scope for this exploration.
The first category is easy, for we already have it. The second category is harder, but once you get past the authentication it’s easy again. One could query a database for visibility = 'authenticated' OR visibility = 'public'; that would work.
The third category would require us to keep a list of friends. The fourth category could also require us to keep lists of people, so it might be better to merge them.
Throw in some tables
This brings us to a simple database schema. I see three main tables: entries, people and groups, with pivot tables between all of them: entry_group, entry_person and group_person. I have chosen ‘people’ over ‘users’, because I might not want to give these people write access to anything, but they could be users as well.
It should work like this:
- Entries have a field for visibility, which can be marked public, authenticated or private.
- People can belong to groups, which have names. Think ‘Friends’, ‘Family’ and ‘Coworkers’.
- Entries can be opened up to individual people, or to a whole group.
There might be better ways of naming these, but I like the simplicity of this model. With private posts and audiences, I will always have to manage some form of lists, and this is the simplest way of doing it.
Enter the monster query
So, with some trial and error, PHPUnit tests, and a lot of Laravel magic I came to the following monster query for these tables:
(
  select `entries`.*
  from `groups`
  inner join `group_person` on `groups`.`id` = `group_person`.`group_id`
  inner join `entry_group` on `groups`.`id` = `entry_group`.`group_id`
  inner join `entries` on `entries`.`id` = `entry_group`.`entry_id`
  where `group_person`.`person_id` = ?
)
union
(
  select `entries`.*
  from `entries`
  inner join `entry_person` on `entries`.`id` = `entry_person`.`entry_id`
  where `entry_person`.`person_id` = ?
)
union
(
  select *
  from `entries`
  where `visibility` = 'public'
)
order by `published_at` desc
... which is way shorter when expressed with Laravel’s Eloquent:
class Person extends Model
{
    public function timeline()
    {
        return $this->groups()
            ->join('entry_group', 'groups.id', '=', 'entry_group.group_id')
            ->join('entries', 'entries.id', '=', 'entry_group.entry_id')
            ->select('entries.*')
            ->union($this->entries()->select('entries.*'))
            ->union(Entry::whereVisibility('public'))
            ->orderBy('published_at', 'desc');
    }

    public function groups()
    {
        return $this->belongsToMany(Group::class);
    }

    public function entries()
    {
        return $this->belongsToMany(Entry::class);
    }
}
Since the method timeline() returns the Query object, other where-clauses can be appended when needed.
I am in a bit of a fight with Laravel still, for it adds '`group_person`.`person_id` as `pivot_person_id`, `group_person`.`group_id` as `pivot_group_id`' to the first query, which makes it blow up, but the raw query works!
There is possibly a better way of doing it, but this is a start! Feel free to steal or improve, but if you improve, let me know.
Spent a good evening reading up on Reader-discussions and -ideas, then on refactoring the Microsub endpoint in Leesmap into separate Controllers. Very curious how Aaron does this in Aperture (which is also PHP/Laravel), but still not looking at his code until I'm done with it.
Do you know that feeling when you just get out of a rollercoaster and want more, more, more, but when you are being hoisted up in the cart, you're suddenly unsure about why again? That's what I feel with IndieWebCamp Berlin right now. But I'm sure it will be fine once I'm there :)