Looking back at all the work I did over the last decade, one thing was mostly constant: how my system was set up, which tools you could find on it and how I worked on projects. But things changed a little when I decided to do more work on my iPad. When traveling I usually do not write a lot of code, if any at all. But incidents happen, or sometimes there is simply enough downtime to get some coding done between meetings, mentoring and architecture sessions. When traveling I usually focus on those three things, and the iPad is the perfect device for all three of them. But writing code? I usually had a MacBook with me for that.
If your first thought reading this article is “everything comes around, he is talking about thin clients”, you are not that far off. But first let us take a step back and talk about the things I did not particularly like about my old setup.
Dependency management. Jumping between legacy projects and greenfield / explorative projects usually means different versions of a compiler or interpreter, different versions of libraries – sometimes incompatible with each other – and service dependencies in various versions, like databases or key-value stores. There are some solutions to this problem, but none that are seamless and pain-free to use. And even after those problems are solved you might not be able to run the production version of a dependency on your laptop, which means you either end up with some form of Docker or VM setup or you yolo through development and hope the CI does its job well.
Reliance on one system. I obviously back up data, configs and usually the whole system. And I document how to set up a project. While this might not be the case for every project you will ever touch, it should be. But what happens when your system is damaged or stolen while you are traveling? You likely buy a new one and start setting up everything as well as you can – maybe you are lucky and have a backup with you. If you do not trust your own hardware you can provision an instance hosted by one of the larger cloud providers.
Keeping systems in sync. You would expect this to be a solved problem considering all the options to synchronise files across multiple devices. But the moment you add caching and database systems, maybe an S3-compatible server to test against the actual API, things get a lot trickier. I usually synced my MacBook, which sat unused for weeks somewhere on a shelf, a few days before traveling. You read that right, I had a MacBook which I touched three to four times a year. I do not work on the couch or in the living room. I might read a bit, pull up a code review, maybe draw an architecture diagram from time to time, but I do not sit on the couch coding. When I write code I am in my office in front of my iMac. With the screen positioned at a suitable height, on a proper chair, with all the bells and whistles to make it as comfortable and ergonomic as it can be.
When macOS Catalina was released I did a clean installation on my iMac Pro and promised myself to not install project related software or dependencies on it anymore.
So I started out with the most obvious solution: PyCharm with a remote interpreter setup, with one VM powered by Parallels for each project, running the Linux distribution and dependencies mirroring production. Now some might be curious why I opted for Parallels and not Docker. I am more comfortable using a real VM, it is way easier for me to configure, work with and snapshot – including the filesystem – and resource usage was roughly the same. Parallels is shockingly good at managing resources. Also, I am working on a way too powerful iMac Pro for simply putting text in a file; resources are plenty, and if they were not it would be cheaper to acquire more resources than to spend my time on bending Docker to my will.
This worked nicely for a side project and the Node.js stack we are using at Nurx. But it tied me to my iMac, which was not optimal considering that I would surely not ship it to wherever I travel. So I had to figure out how to write code on my iPad.
One solution many people figured out before me is pretty simple, especially if you already wrote code while the year still started with a 1: Vim. Well, a slightly newer fork – NeoVim. Together with CoC, which brings the Language Server Protocol to NeoVim, it seemed like a pretty good choice. And it is. Together with Blink the experience is actually really nice. A few days in I started to remember why I did not like Vim when more complex plugins were in use – debugging why it stopped working is a nightmare. I already see the die-hard Vim fans lining up to tell me this either never happens or that I should not be using plugins at all – good points, thanks for the input, moving on.
I started to explore browser-based IDEs. I knew about Eclipse Che, and if it were the answer I would be ordering a 16″ MacBook Pro right now. Other options were worse in some aspects. But someone, somewhere did something smart. They took a desktop editor built with web technology which can be bloated with plugins to behave like an IDE, and made it possible to run it as a daemon and connect to it via a browser. code-server brings VSCode to the browser. In nearly all its glory. And it actually works really well.
Here is a summary of how you use it:
- download the latest version
- start it and point it to your project directory
- there is no step three
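The steps above can be sketched as a short shell session. The version number and download URL here are assumptions to make the example concrete – check the code-server releases page for whatever is current, and note that flags have changed between releases:

```shell
# Hypothetical version number – check the code-server releases page for the latest.
VERSION=3.4.1
TARBALL="code-server-${VERSION}-linux-amd64.tar.gz"
URL="https://github.com/cdr/code-server/releases/download/v${VERSION}/${TARBALL}"

# Download and unpack (requires network access):
# curl -fsSL "$URL" | tar -xz

# Start it and point it at your project directory. Bind to localhost and put
# an SSH tunnel or reverse proxy with TLS in front – never expose it directly:
# ./code-server-${VERSION}-linux-amd64/code-server --bind-addr 127.0.0.1:8080 ~/projects/my-project

echo "$URL"
```

There really is no step three: open the printed host and port in a browser on the iPad and you get VSCode.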
It is simple. It is powerful. It is well supported with a ton of plugins. And it is actively being maintained. Even Microsoft is working on making VSCode viable from anywhere. The integrated terminal means I do not have to run two apps side by side on the iPad to work on code, but can simply keep a browser open.
This seems like a pretty decent setup to work with. But it still forced me to keep two systems in sync. I tried using code-server as my main development environment for a bit, but constantly switching to the correct browser tab or running a different browser for easy access was a bit annoying.
The good news is that Visual Studio Code with its remote capabilities solves all of this. I can connect to the exact same box code-server is running on, and once settings sync is set up, keeping the editor configuration the same should be fairly straightforward.
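Both the desktop editor (via the Remote – SSH extension) and an SSH tunnel to code-server can share a single SSH host entry. A minimal sketch, in which the host name, user and key path are all placeholders for your own setup:

```
# ~/.ssh/config – every value below is a placeholder.
Host devbox
    HostName devbox.example.internal
    User stefan
    IdentityFile ~/.ssh/id_ed25519
    # Keep the connection alive across flaky hotel Wi-Fi.
    ServerAliveInterval 30
```

With this in place, `ssh -L 8080:127.0.0.1:8080 devbox` forwards a locally running code-server to the browser, and the Remote – SSH extension can connect to `devbox` by name.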
Moving to Visual Studio Code was nearly painless. It does not really use many more resources than a Java-based IDE – what a benchmark, I know. IntelliSense works really well except for Django's models, which, thanks to an obscene amount of metaprogramming, always cause some problems for completion providers – I am still impressed how well JetBrains solved this problem. Debugging is not implemented as nicely as you might be used to from some IDEs, but it is more than functional and it is improving. Overall I am happy with it. I always eye NeoVim and think it would make things a bit easier, but somehow the VSC setup feels good, so “meh”.
Let us talk about accessories for a bit. You want a keyboard and a mouse. Thanks to iPadOS you can use a mouse, and for the sake of comfort and usability you want one. The cursor is stupidly large; I hope they will introduce an option to make it more like a regular one at some point. Other than that it works really well. Except when you want to use a Magic Mouse, which needs like five additional steps to pair and is horrible to use – makes sense, right? I chose to get another Logitech MX Ergo, I really like trackballs and this one is amazing. As a keyboard I chose a Magic Keyboard. It works really well, it is light and easy to transport. I tried the Smart Keyboard Folio for the iPad, but I neither like the tilt when the iPad is standing in it nor the feel of the keys. As a stand I am using the standard Smart Folio, which gives a viewing angle I like, and I can easily put it on a box or some books to elevate it when I feel like it. Luckily all three choices are easily replaceable with whatever you prefer.
There is another reason I like this approach, even if it was not the driving factor: cost. A decent server in my basement, regularly updated, is cheaper – as in total cost of ownership – than an iMac Pro or Mac Pro. I do not believe Apple's hardware is overpriced, it simply provides a lot more than what the server needs in this case. The server needs CPUs, memory and storage. Fast, and a lot of it. I currently run 40 cores with 256GB of memory, an SSD RAID and one NVMe drive. This thing is fast. It has more resources than most people need. I can run a whole production stack of some startups from memory, including all the data. If you do not want to host hardware, some instance somewhere in the cloud might do the trick, and if you only run it while you actually work this can be relatively cheap.
Now comes the tricky part – what would I use as a desktop? I could actually use the server itself. I could connect my iPad to an external screen, but sadly this messes with the resolution and keeps the iPad's aspect ratio, which is annoying to look at, and I would prefer some window management capabilities for more than two apps for my primary driver, even when I primarily work with one or two windows on screen – Samsung DeX handles this pretty nicely. I could use an Asus Chromebit. Maybe Apple comes around and fixes all the annoyances when connecting an iPad to a larger screen.
As long as I have to occasionally work on iOS and Android projects my iMac Pro is here to stay. The “iPad coding setup” works extremely well for web development and some data / reporting / analytics related projects – the Jupyter client Juno Connect is amazing – but I currently do not see any nice way to get full desktop access on a remote system. This includes VNC, Jump Desktop and RDP. Believe me, I tried.
So how did this setup treat me so far? Excellent. I have been using it for over three months now. I spent roughly 10 days working only with the iPad, which equals one long business trip. I never felt limited in any way and even tried to put more time into coding to see if I could find any problems when using it for longer periods than I expect my regular use to be. In the long run I might consider provisioning a VPS before traveling (maybe I can set up an LXC sync with Proxmox – which is currently powering my VMs) just in case something happens at home and my VPN is down or the server is literally on fire. I would prefer to not have to re-provision my environment during a trip, which I could since all configs are in git and accessible to me, but this would also require VPN access to my home network, so the single point of failure is the same. Since most work is happening in the browser, latency was not really an issue. Can I recommend this setup to everyone? The answer is clearly “no”. But I can say it is more than functional and might work well for you if you give it a try.
There are some topics which can cause a very controversial discussion when you put software engineers in a room. Vim vs Emacs. Favorite Linux distribution. Generics and static typing. But one thing a significant amount of them seem to agree on is that a multi-screen setup is a requirement to be productive.
Over the last 24 years I looked at many computer screens. Different kinds, different sizes and obviously different numbers of screens. And for a long time I would have agreed that you should run a multi-screen system to increase your productivity. There are blog posts, studies and anecdotes all over the net advocating for multiple screens.
My first dual-screen setup was a mix of a 17″ and a 15″ CRT. Later on a 19″ and a 17″ LCD. At some point dual 24″. With the 2015 MacBook Pro I moved to dual 27″, and when upgrading to the iMac Pro to triple 27″. But something changed for me. I actually got rid – read: upgraded my parents' offices – of two LG UltraFine 5Ks and now only work with my 5K iMac Pro.
The most important part of the above paragraph is “for me”. I did not conduct a scientific study. I did not run multiple tests. I reduced my setup one screen at a time, working two to four weeks with a setup to see if anything changed. Your mileage may vary.
The physical change is pretty obvious: I do not move my head all day. At some point I used anything but the iMac as an “informative” screen in my peripheral vision and when something happened I moved the window to my main screen. Not moving my head a lot during the day is actually more pleasant – who would have thought?
From a productivity perspective I made some changes to account for the lack of a second and third screen:
I never kept email, chat, etc. open all day to immediately react to every single message. Before, I had them sitting on the left screen but simply did not look at them, so my availability is the same. If I am pinged or get a notification I can choose to open the app or move to the virtual screen with one click. I can catch up with the random and music channels at any other time; I do not need to know that there is activity.
I often had a web browser with documentation open on the right screen. Now I mostly just Command-Tab to my browser, read what’s relevant and Command-Tab back to my editor. It is rare that I need the editor and documentation at the same time, but on a 27″ 5K screen I can fit both next to each other if I really need to. Thanks to Magnet this is only a matter of a few keystrokes.
I have my 12.9″ iPad Pro next to my iMac and played a bit with Sidecar, but I mostly use it to move apps to the iPad when I want to use the Apple Pencil. Initially I thought it might be a nice compromise if I need some docs open, but I find it more comfortable to rearrange windows than moving my head.
Overall, 27″ simply proved to be large enough to comfortably work with a single-screen setup. Having a multi-screen system is something I set up out of habit. And I believe with smaller screens this still makes a lot of sense. But hardware evolved into large screens. (A single 34″ slightly curved widescreen might be the sweet spot – not a lot of head movement, no bezels and a little bit more vertical space to fit a small window in for some convenience and a little bit less rearranging.)
Why am I writing this post? To enlighten people on how to be productive? To advocate minimalism? To kick off a long study of productivity related to size and number of screens? Nope, nothing fancy or hip. Just a reminder – primarily for myself – to reevaluate old beliefs and habits from time to time.
It looks like Apple started rejecting applications built using Electron in the AppStore review process. Chromium, the foundation of Electron, is using private APIs, something Apple, to my knowledge, has always been pretty clear is a no-go for the AppStore. Judging by Firefox, using those private APIs is a requirement to make the browser more performant and battery efficient.
I think the link summarises the problem pretty well and to me the biggest problem here is the inconsistency of the AppStore review process and the fact that Apple is willing to bend or ignore the rules for companies of a certain size.
If an application is not conforming to the AppStore guidelines I expect it to be rejected. With private API usage this has to be the case for all Electron-based applications. But it only hit a few. And bigger players like Slack get away with it. Either they were not caught yet or it was ignored. Your guess is as good as mine. Maybe it will hit them at some point, maybe it will not. And this is the real problem.
I like the AppStore. I fully understand all the criticism it gets from developers. Having shipped apps to it and having jumped through some hoops to get an app featured, it is hard to deny any of the valid points people regularly bring up. But from a user perspective the AppStore is amazing.
In a perfect world one of the things that make the AppStore amazing would be the review process. Chance for malware? Zero. Chance for an app breaking my system? Zero. Chance for being scammed and not being able to get my money back? Zero.
In a perfect world. In the world we live in, most of the things I mentioned above do happen. And some bad – as in do-not-follow-the-rules – apps slip through.
To get closer to a perfect world there have to be some rules, and Apple laid them out. They did not necessarily create tooling to make it easier to conform to them, so developers are required to actually understand the whole stack, frameworks and libraries they are using. This is relatively simple if you write an app in Swift and primarily stick to Apple's SDKs. It gets a lot harder, and I would argue nearly impossible, if you use something like Electron.
The big problem to me is not that Apple is banning some Electron apps because they violate the private API rule. They are enforcing a rule they put in place for a likely very good reason.
To me the real problem is that Apple is enforcing a rule we all know a lot of apps still live in the AppStore violate, but action is only taken against a few. As long as this consistency is lacking, one of the most critical things that would make the AppStore amazing remains lacklustre.
One of my go-to tools for nearly every application, Sentry, changed their license to the BSL, like CockroachDB did some time ago. This obviously sparked a discussion about Open Source Software and sustainable business models, and as expected there was someone claiming Sentry got big on OSS and now betrays everyone. Armin Ronacher posted a good take on the whole situation which is also worth reading.
Personally I appreciate Sentry moving to a license which protects the company from AWS and the like. The chances that they simply start offering a hosted service for a price a sustainable company cannot compete with are too high – and we have seen this happen over and over in the past. Having some protection and a sustainable business means the software will continue to exist, evolve and serve me well. And the fact that after three years the code transitions to an Apache 2.0 license makes the whole thing even better.
Open Source hardliners rightfully claim that Sentry cannot be considered OSS anymore, and I agree. And they are quite open about it, even if they try to sugarcoat it in half a sentence.
“Although we’ve come to refer to the BSL as eventually open-source since it converts to an OSI-approved license at the conversion date, due to the grant restriction, it is formally not an open-source license.” (Sentry announcement post)
But the impact is what I am interested in, and it basically does not exist. I can still self-host Sentry. I can still look at the code if I run into some strange bug I cannot explain – looking at you, SAML and Google SSO integration. I appreciate and support OSS as well as I can, but I also understand the business interest of protecting a company and making sure the development of a project which grants users so much freedom is done in a sustainable way. SaaS and hosted services changed the game for OSS, but we have only started to see reactions to this change in the last few years.
There is only one part I strongly disagree with.
“The BSL lets us hit our goals in a clean and low-impact way: it won’t change anyone’s ability to run Sentry at their company” (Sentry announcement post)
Most companies I worked with had a set of permissible software licenses. This means that any library or software released under a permissible license can be used – obviously with some common sense and most likely some form of internal approval by the engineering team or leads – without additional legal approval. One of the most straightforward approaches I have seen is approving all licenses which can be included in an ASF project.
With the BSL not meeting this criterion, many teams will now need their legal department to review the BSL and explicitly approve it. Depending on the company size and the legal team this can become a multi-month process. A painful multi-month process. This side effect hopefully is only a temporary problem. At some point the BSL will likely become common enough that lawyers know about it and have a good understanding of the intent and reasoning, so approving it becomes as straightforward as approving an MIT license.
After reading some of the discussion I feel like I should add that I do not believe in a secret agenda. “They try to push people from self-hosting to their expensive, hosted service.” Yeah… Just no. There would be far better ways to do this. And the only difference is, worst case, some approval process you have to go through. This would actually be the worst execution of such a plan, and having talked to some people from the Sentry team I am certain they would not be stupid enough to execute a plan this badly if they actually had one.
OSS projects with a commercial offering need a way to protect their company and income. In my opinion Sentry is doing this in the best possible way, especially since it hardly changes anything for its current and future users. The only thing I would advise you to do if you self-host Sentry is to run the BSL by your legal team for approval. Otherwise the next audit might bring some unnecessarily unpleasant conversations.
Earlier this year I made a drastic change to my online presence, stopped using my static site generator and migrated all my content to WordPress. Back in 2011 I was more than convinced that WordPress was basically dead and should never be used for anything. It is funny how times and opinions change. But I also set out with a very specific goal of publishing more content and owning more of the content I produce. Back then my idea of how I wanted to run my online presence was still a bit vague, but with some time, and a few people complaining about and/or praising micro.blog, I started to get a pretty good idea of how to move forward.
The idea of micro.blog (and micro blogging in general) brings back fond memories. A long, long time ago when I started blogging, before it was even called a blog, before RSS 0.9 was published, I published things to the Internet. Often they only consisted of a few sentences, sometimes an image, on rare occasions some long-form writing. Back then you rented some webspace or signed up for Geocities, put some visitor counter on the page and searched for some gif that acted as an eye catcher to make the site appear interesting. And you started publishing things.
As times changed, so did expectations. Random short-form ramblings moved on to Twitter, Facebook and other centralised platforms, and blogs felt reserved for meaningful, in-depth content. And slowly but surely most content was produced on platforms which focus on generating money for their investors. “1/x” was born. Because 34 tweets are the preferred way to convey information over a simple blog post.
This felt like a natural progression. Hosting and publishing content on a platform on which you actually own the content is hard. Signing up for Twitter is not. Growing an audience on your own little piece of the Internet is really hard. Being sarcastic in 140 characters and finding people to retweet your ramblings is not. Platforms like Twitter provide an experience tailored to people who want to publish content, get immediate feedback and have an actual chance of growing their audience. The experience is frictionless, which is why it became popular and won so many users in such a short time.
Micro.blog is a solution that brings all of that together. It is mostly frictionless – adding your own domain still takes a few more clicks, but more on that in a moment – it allows you to own your content, you can grow your audience and it allows all the (nearly) real-time interactions that make you feel good. Who does not like random Internet points in the form of likes or retweets?!
Yet there is a lively discussion about content ownership going on. Do you own your content when it is reachable via a domain you own? Or do you have to own the publishing platform to actually own the content? (Spoiler: This will be a vim vs emacs discussion – there will be no winner and a few people will take it personally.)
In my opinion it is critical that you can bring your own domain. And have a backup of your content. Sure, micro.blog could go out of business tomorrow and you will have some down time, maybe lose a post or two. But you can always put your content somewhere else, update your DNS, maybe rewrite some URLs and you are good again. This obviously requires some technical knowledge, which means it will not be a viable solution for most people who joined Twitter for the frictionless experience.
Having to self-host a platform for true content ownership just opens up so many problems, including discoverability and growing your audience, that it will not be a feasible option for most. Mastodon is a good example of this. Federation is built into the product, yet most interactions seem to happen on the instance you signed up for, until a boosted toot from another instance accidentally shows up in your local or private timeline. But overall discoverability is still bad and you are forced to rely on the instance owner.
In my opinion micro.blog strikes the right balance between content ownership, interactions, discoverability and usability for the standard user. There is nearly no friction and it is accessible enough to allow mass adoption. One thing we should not forget is that not everyone cares about content ownership. There are people who are happy to just post random things and sometimes interact with others. Which is also totally fine, we should not project our values on them, and micro.blog is doing the right thing: sign up, start using it, no third step required.
Personally I prefer ownership of my content in the sense that I only have to rely on my hosting provider and, worst case, simply move to another one. And I honestly have more faith in my provider than in a niche company to keep my site online for years. If it were possible to self-host micro.blog I would seriously consider it; the apps and the way replies and interactions are handled are really nice. But since this is not an option, it means back to the drawing board.
One thing became pretty clear to me throughout this year: I want a place to post short form content – like micro.blog… but not micro.blog.
Once I understood what I want, it was time to figure out how to achieve it. I actually entertained the idea of setting up micro.blog on a subdomain, and while I did not completely discard this idea, it would be $5 a month for something my main site can already do while staying consistent with my design.
Another thought was making everything part of the main page, but I could see that a random photo, a link and maybe a long-form article about securing a production environment on AWS might not go well together as two or three consecutive posts in the RSS feed.
Some form of mixed solution would be a separate category rendered as a timeline which is excluded from the main feed and index page.
Solving this problem is not hard, I just have to figure out which will work best, which means some educated guess work based on unreliable data and unproven assumptions. Very scientific, I know, but hey, this is not my job and not production critical.
One thing I would like to set up is cross-posting content to other networks. There are plugins for WordPress itself and there are the often preferred solutions like IFTTT. Micro.blog actually makes it stupidly simple to cross-post; you just have to enter your feed URL. Twitter and Instagram are a bit more tricky from what I can tell, but it should be doable. For Instagram the other way around might make more sense – post to Instagram and pull the content into the blog.
Considering all of this, there are two things that would still be missing: some form of reactions and a better way to post content. The latter is especially important to me for short-form content. If there is too much friction I will most likely lose interest pretty fast. And with WordPress’ recent changes the editing experience already took a big hit.
Both requirements could be solved with plugins. The Micropub plugin allows me to use a few really well-built clients from what I can tell. To replace trackbacks – they obviously need to be replaced, right? – there are the Webmention plugin and Semantic Linkbacks. A long time ago I removed the comment section on this blog and I still believe this was a good idea. And I think webmentions will fall victim to the same spam practices and abusive behaviour as most free-form comment fields on the Internet. But I like the fact that there are standards and a way forward if I ever change my mind, and that people who like comments and mentions have a way to get them on their own site.
In the end it does not really matter where you land in the content ownership discussion. There is a way to own your content that still allows you to reap the benefits of centralised platforms without being too dependent on them. Those platforms will always have some form of control over the content and your presence on them. Your account might get locked, your content might get deleted, but since it would only be a mirror or an excerpt with a link, your actual content will stay online. (A whole other topic I do not even want to scratch is the ethics and longevity of those platforms; this deserves a separate post.) I would prefer to see a more decentralised web again, but large platforms like Twitter and Facebook are here to stay for the above-mentioned reasons. What we can do to mitigate their impact slightly is treat them as a commodity to distribute content hosted elsewhere. In the short term such a hybrid model is likely the only promising way to regain ownership of our content without a major impact on audience, reach and interactions.