There are some topics which can cause a very controversial discussion when you put software engineers in a room. Vim vs Emacs. Favorite Linux distribution. Generics and static typing. But one thing a significant number seem to agree on is that a multi screen setup is a requirement to be productive.
Over the last 24 years I have looked at many computer screens. Different kinds, different sizes and obviously different numbers of screens. And for a long time I would have agreed that you should run a multi screen system to increase your productivity. There are blog posts, studies and anecdotes all over the net advocating for multiple screens.
My first dual screen setup was a mix of a 17″ and a 15″ CRT. Later on a 19″ and a 17″ LCD. At some point dual 24″. With the 2015 MacBook Pro I moved to dual 27″ and when upgrading to the iMac Pro to triple 27″. But something changed for me. I actually got rid – read: upgraded my parents’ offices – of two LG Ultrafine 5k and now only work with my 5k iMac Pro.
The most important part in the above paragraph is “for me”. I did not conduct a scientific study. I did not run multiple tests. I reduced my setup one screen at a time, working two to four weeks with a setup to see if anything changed. Your mileage may vary.
The physical change is pretty obvious: I do not move my head all day. At some point I used every screen but the iMac as an “informative” display in my peripheral vision, and when something happened I moved the relevant window to my main screen. Not moving my head a lot during the day is actually more pleasant – who would have thought?
From a productivity perspective I made some changes to account for the lack of a second and third screen:
I never kept email, chat, etc. open all day to immediately react to every single message. Before, I had them sitting on the left screen but simply did not look at them, so my availability is the same. If I am being pinged or get a notification I can choose to open the app or move to the virtual screen with one click. I can catch up with the random and music channels at any other time; I do not need to know that there is activity.
I often had a web browser with documentation open on the right screen. Now I mostly just Command-Tab to my browser, read what’s relevant and Command-Tab back to my editor. It is rare that I need the editor and documentation at the same time, but on a 27″ 5k screen I can fit both next to each other if I really need to. Thanks to Magnet this is only a matter of a few keystrokes.
I have my 12.9″ iPad Pro next to my iMac and played a bit with Sidecar, but I mostly use it to move apps to the iPad when I want to use the Apple Pencil. Initially I thought it might be a nice compromise if I need some docs open, but I find it more comfortable to rearrange windows than moving my head.
Overall 27″ simply proved to be large enough to comfortably work with a single screen setup. Having a multi screen system is something I set up out of habit. And I believe with smaller screens this still makes a lot of sense. But hardware evolved into large screens. (A single 34″ slightly curved widescreen might be the sweet spot – not a lot of head movement, no bezels and a little bit more vertical space to fit a small window in for some convenience and a little bit less rearranging.)
Why am I writing this post? To enlighten people on how to be productive? To advocate minimalism? To kick off a long study of productivity related to size and number of screens? Nope, nothing fancy or hip. Just a reminder – primarily for myself – to reevaluate old beliefs and habits from time to time.
It looks like Apple started rejecting applications built using Electron in the AppStore review process. Chromium, the foundation of Electron, is using private APIs, something Apple, to my knowledge, was always pretty clear is a no-go for the AppStore. Judging by Firefox, using those private APIs is a requirement to make a browser more performant and battery efficient.
I think the link summarises the problem pretty well and to me the biggest problem here is the inconsistency of the AppStore review process and the fact that Apple is willing to bend or ignore the rules for companies of a certain size.
If an application is not conforming to the AppStore guidelines I expect it to be rejected. With private API usage this has to be the case for all Electron based applications. But it only hit a few. And bigger players like Slack get away with it. They either were not caught yet or it was ignored. Your guess is as good as mine. Maybe it will hit them at some point, maybe it will not. And this is the real problem.
I like the AppStore. I fully understand all the criticism it gets from developers. Having shipped apps to it and having jumped through some hoops to get an app featured it is hard to deny any of the valid points people regularly bring up. But from a user perspective the AppStore is amazing.
In a perfect world one of the things that make the AppStore amazing would be the review process. Chance for malware? Zero. Chance for an app breaking my system? Zero. Chance for being scammed and not being able to get my money back? Zero.
In a perfect world. In the world we live in, none of those chances are actually zero. And some bad – as in: do not follow the rules – apps slip through.
To get closer to a perfect world there have to be some rules, and Apple laid them out. They did not necessarily create tooling making it easier to conform to them, and they require developers to actually understand the whole stack, frameworks and libraries they are using. This is relatively simple if you write an app in Swift and primarily stick to Apple’s SDKs. It gets a lot harder, and I would argue nearly impossible, if you use something like Electron.
The big problem to me is not that Apple is banning some Electron apps because they violate the private API rule. They are enforcing a rule they put in place for a likely very good reason.
To me the real problem is that Apple is enforcing a rule we all know a lot of apps still live in the AppStore violate, but action is only taken against a few. As long as they lack consistency, one of the most critical things that would make the AppStore amazing remains lacklustre.
One of my go-to tools for nearly every application, Sentry, changed their license to the BSL, like CockroachDB did some time ago. This obviously sparked a discussion about Open Source Software and sustainable business models, and as expected there was someone claiming Sentry got big on OSS and now betrays everyone. Armin Ronacher posted a good take on the whole situation which is also worth reading.
Personally I appreciate Sentry moving to a license which protects the company from AWS and the likes. Chances that they simply start offering a hosted service for a price a sustainable company cannot compete with are too high – and we have seen this happen over and over in the past. Having some protection and a sustainable business means the software will continue to exist, evolve and serve me well. And considering that after three years the code transitions to an Apache 2.0 license makes the whole thing even better.
Open Source hardliners rightfully claim that Sentry cannot be considered OSS anymore, and I agree. And Sentry is quite open about it, even if they try to sugar-coat it in half a sentence.
“Although we’ve come to refer to the BSL as eventually open-source since it converts to an OSI-approved license at the conversion date, due to the grant restriction, it is formally not an open-source license.” (Sentry announcement post)
But the impact is what I am interested in and it basically does not exist. I can still self host Sentry. I can still look at the code if I run into some strange bug I cannot explain – talking about you SAML and Google SSO integration. I appreciate and support OSS as good as I can, but I also understand the business interest of protecting a company and making sure the development of a project which grants users so much freedom is done in a sustainable way. SaaS and hosted services changed the game for OSS, but we only start to see reactions to this change the last few years.
There is only one part I strongly disagree with.
“The BSL lets us hit our goals in a clean and low-impact way: it won’t change anyone’s ability to run Sentry at their company.” (Sentry announcement post)
Most companies I worked with had a set of pre-approved software licenses. This means that any library or software released under one of those licenses can be used – obviously with some common sense and most likely some form of internal approval by the engineering team or leads – without additional approval. One of the most straightforward approaches I have seen is approving all licenses which can be included in an ASF project.
With the BSL not meeting this criterion, many teams will now need their legal department to review the BSL and explicitly approve it. Depending on the company size and the legal team this can become a multi-month process. A painful multi-month process. This is a side effect which hopefully is only a temporary problem. At some point the BSL will likely become common enough that lawyers know about it and have a good understanding of the intent and reasoning, so approving it becomes as straightforward as approving an MIT license.
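To make the allow-list idea concrete, here is a minimal sketch of such a check. The allow-list contents and the dependency data are hypothetical examples (not from any specific policy), and in a real setup the dependency-to-license mapping would come from your package manager’s metadata:

```python
# Hypothetical allow-list of licenses pre-approved for use without legal review.
ALLOWED = {"MIT", "BSD-3-Clause", "Apache-2.0"}

def unapproved(dependencies):
    """Return dependency names whose declared license needs explicit legal review.

    `dependencies` maps a dependency name to its declared license identifier.
    """
    return sorted(name for name, lic in dependencies.items() if lic not in ALLOWED)

# Example: a BSL-licensed dependency would be flagged for review.
print(unapproved({"sentry": "BSL-1.1", "requests": "Apache-2.0"}))  # → ['sentry']
```

A check like this is only as good as the declared license metadata, which is exactly why an unfamiliar license such as the BSL ends up on a lawyer’s desk instead of being waved through automatically.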
After reading some of the discussion I feel like I should add that I do not believe in a secret agenda. “They try to push people from self hosting to their expensive, hosted service” Yeah… Just no. There would be far better ways to do this. The only difference is, worst case, some approval process you have to go through – this would actually be the worst possible execution of such a plan, and having talked to some people from the Sentry team I am certain they are smart enough to come up with something better if that were their goal.
OSS projects with a commercial offering need a way to protect their company and income. In my opinion Sentry is doing this in the best possible way, especially since it hardly changes anything for its current and future users. The only thing I would advise you to do if you self host Sentry is running the BSL by your legal team for approval. Otherwise the next audit might include some unnecessarily unpleasant conversations.
Earlier this year I made a drastic change to my online presence, stopped using my static site generator and migrated all my content to WordPress. Back in 2011 I was more than convinced that WordPress was basically dead and should never be used for anything. It is funny how times and opinions change. But I also set out with a very specific goal of publishing more content and owning more of the content I produce. Back then my idea of how I wanted to run my online presence was still a bit vague, but with some time and a few people complaining about and/or praising micro.blog I started to get a pretty good idea of how to move forward.
The idea of micro.blog (and micro blogging in general) brings back fond memories. A long, long time ago when I started blogging, before it was even called a blog, before RSS 0.9 was published, I published things to the Internet. Often they only consisted of a few sentences, sometimes an image, in rare occasions some long form writing. Back then you rented some webspace or signed up for Geocities, put some visitor counter on the page and searched for some gif that acted as an eye catcher to make the site appear interesting. And you started publishing things.
As times changed, so did expectations. Random short form ramblings moved to Twitter, Facebook and other centralised platforms, and blogs felt reserved for meaningful, in-depth content. And slowly but surely most content was produced on platforms which focus on generating money for their investors. “1/x” was born. Because 34 tweets is the preferred way to convey information over a simple blog post.
This felt like a natural progression. Hosting and publishing content on a platform on which you actually own the content is hard. Signing up for Twitter is not. Growing an audience on your own little piece of the Internet is really hard. Being sarcastic in 140 characters and finding people to retweet your ramblings is not. Platforms like Twitter provide an experience tailored to people who want to publish content, get immediate feedback and have an actual chance of growing their audience. The experience is frictionless, which is why it became popular and won so many users in such a short time.
Micro.blog is a solution that brings all of that together. It is mostly frictionless – adding your own domain still takes a few more clicks, but more on that in a moment – allows you to own your content, lets you grow your audience and supports all the (nearly) real time interactions that make you feel good. Who does not like random Internet points in the form of likes or retweets?!
Yet there is a lively discussion about content ownership going on. Do you own your content when it is reachable via a domain you own? Or do you have to own the publishing platform to actually own the content? (Spoiler: This will be a vim vs emacs discussion – there will be no winner and a few people will take it personally.)
In my opinion it is critical that you can bring your own domain. And have a backup of your content. Sure, micro.blog could go out of business tomorrow and you will have some down time, maybe lose a post or two. But you can always put your content somewhere else, update your DNS, maybe rewrite some URLs and you are good again. This obviously requires some technical knowledge, which means it will not be a viable solution for most people who joined Twitter for the frictionless experience.
Having to self host a platform for true content ownership opens up so many problems – including discoverability and growing your audience – that it will not be a feasible option for most. Mastodon is a good example of this. Federation is built into the product, yet most interactions seem to happen on the instance you signed up for, until a boosted toot from another instance accidentally shows up in your local or private timeline. But overall discoverability is still bad and you are forced to rely on the instance owner.
In my opinion micro.blog strikes the right balance between content ownership, interactions, discoverability and usability for the standard user. There is nearly no friction and it is accessible enough to allow mass adoption. One thing we should not forget is that not everyone cares about content ownership. There are people who are happy to just post random things and sometimes interact with others. Which is also totally fine, we should not project our values on them, and micro.blog is doing the right thing: sign up, start using it, no third step required.
Personally I prefer ownership of my content in the sense that I only have to rely on my hosting provider and, worst case, simply move to another one. And I honestly have more faith in my provider than in a niche company to keep my site online for years. If it were possible to self host micro.blog I would seriously consider it, the apps and the way replies and interactions are handled are really nice. But since this is not an option it means back to the drawing board.
One thing became pretty clear to me throughout this year: I want a place to post short form content – like micro.blog… but not micro.blog.
Once I understood what I want, it was time to figure out how to achieve it. I was actually entertaining the idea of setting up micro.blog on a subdomain, and while I did not completely discard this idea, it would be $5 a month for something my main site can already do – and the main site would even be consistent with my design.
Another thought was making everything part of the main page, but I could see that two or three posts in the RSS feed for a random photo, a link and maybe a long form article talking about securing a production environment on AWS might not go well together.
Some form of mixed solution would be a separate category rendered as timeline which is excluded from the main feed and index page.
Solving this problem is not hard, I just have to figure out which will work best, which means some educated guess work based on unreliable data and unproven assumptions. Very scientific, I know, but hey, this is not my job and not production critical.
One thing I would like to set up is cross posting content to other networks. There are plugins for WordPress itself and there are the often preferred solutions like IFTTT. Micro.blog actually makes cross posting stupidly simple, you just have to enter your feed URL. Twitter and Instagram are a bit more tricky from what I can tell, but it should be doable. For Instagram the other way around might make more sense – post to Instagram and pull the content into the blog.
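The feed-driven approach micro.blog uses can be sketched in a few lines: parse the RSS feed, compare against the links already cross posted, and hand anything new to the target network. This is only an illustration of the idea – the feed markup below is a made-up example, and the actual posting call to Twitter or elsewhere is deliberately left out:

```python
import xml.etree.ElementTree as ET

def new_items(feed_xml, seen_links):
    """Parse an RSS 2.0 feed and return (title, link) pairs not yet cross posted."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):  # every <item> element in the feed
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if link and link not in seen_links:
            items.append((title, link))
    return items

# A scheduled job could fetch the real feed with urllib.request.urlopen(),
# call new_items(), push each entry to the target network's API and then
# record the link as seen so it is not posted twice.
```

The “seen links” bookkeeping is the whole trick here; without it every run would re-post the entire feed.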
Considering all of this, there are two things that would be missing: some form of reactions and a better way to post content. The latter is especially important to me for short form content. If there is too much friction I will most likely lose interest pretty fast. And with WordPress’ recent changes the editing experience already took a big hit.
Both requirements could be solved with plugins. The Micropub plugin allows me to use a few really well built clients from what I can tell. To replace trackbacks – they obviously need to be replaced, right? – there are the Webmention plugin and Semantic Linkbacks. A long time ago I removed the comment section on this blog and I still believe this was a good idea. And I think webmentions will fall victim to the same spam practices and abusive behaviour as most free form comment fields on the Internet. But I like the fact that there are standards and a way back if I ever change my mind, and that people who like comments and mentions have a way to get them on their own site.
In the end it does not really matter where you land on the content ownership discussion. There is a way to own your content in a way that allows you to still reap the benefits of centralised platforms without being too dependent on them. Those platforms will always have some form of control over the content and your presence on them. Your account might get locked, your content might get deleted, but since it would only be a mirror or an excerpt with a link, your actual content will stay online. (A whole other topic I do not even want to scratch are ethics and longevity of those platforms, this deserves a separate post.) I would prefer to see a more decentralised web again, but large platforms like Twitter and Facebook are here to stay for the above mentioned reasons. What we can do to mitigate their impact slightly is treating them as a commodity to distribute content hosted elsewhere. In the short term such a hybrid model is likely the only promising way to regain ownership of our content without major impact in audience, reach and interactions.
I believe every single time I flagged this during an audit, some people on the team went nearly ballistic. How dare I suggest wasting precious time in such a stupid way? Do I not know that this prevents the team from getting things done? It surely has to be an anti-pattern for an agile scrum waterfall team! And yet… it is one of the things that will regularly come up during an audit when people start looking at your processes and procedures.
People who had this discussion will likely point out that this is relevant for the whole company and not engineering specifically. It’s also obviously part of the change management process. And they are right.
But it would not be a 101 if I did not try to deliver this in small, digestible chunks which do not make you hate a potential auditor before they even enter the office. So we will focus on engineering this time.
For many teams it is not even a discussion if there will be a review of a merge request (or pull request, depending on which centralised hosting provider you chose for your decentralised version control system), because in the end this is best practice, right? And it is a perfect example of the four-eyes principle. One engineer implements changes, the other verifies that they will not blow up the whole system the moment they are pushed to production.
But the readiness to commit to the four-eyes principle often ends right there. How are SQL queries approved before they run against your database? How do you ensure infrastructure changes are not destructive? And obviously: how can you ensure that neither of the two examples above causes a vulnerability or data leak?
While the avid reader might now hold their breath and wait for me to explain the one and only true solution to all of those problems, the more experienced reader likely sheds a tear – knowing that this is a hard problem without a good solution.
You obviously want the process to be as light as possible (so you are not waiting days for approval to run SELECT COUNT(id) FROM users). You also want to make sure you have some form of log in place in case something goes wrong, so you can track what happened. Obviously you need exceptions during an emergency, but you still need the log so you can audit all steps taken after the incident is resolved. And those three are only the bare essential requirements to not fail an audit immediately. Thankfully they are also the most important things in less regulated domains.
While there might not be a silver bullet, there is at least some advice that works well enough for most early stage startups: track everything in your version control system.
This might not sound very sophisticated. It surely requires discipline since you do not have any safeguards in place to prevent people from just ignoring this part of the process. But it is better than nothing. A lot better actually. So how does this look in practice?
One way to approach this is to create a new repository with one file per technical domain – infrastructure, database, data warehouse,… – and simply open merge requests against the file. Approval works the same as for code reviews. In case of an incident you copy and paste what you executed into the file and open the merge request after the incident is resolved. You should obviously pair with someone when you are working on a critical piece of your stack to have another pair of eyes on what you are doing.
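As an illustration, an entry in such a per-domain file could look like the following. The format, names and ticket reference are made up for this sketch – there is no standard here, only the requirement that a reviewer can see who wants to run what, and why:

```markdown
## 2019-11-20 production database

Operator: @jane (hypothetical)
Reviewer: @john (hypothetical)
Reason: verify the user count before enabling the migration

    SELECT COUNT(id) FROM users;

Notes: read-only query, no rollback needed.
```

The merge request on this entry is the approval; the merged file plus the git history is the audit log you can hand over after an incident.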
I would argue this is not a lot of work, but you might rightfully ask what it buys you besides some points during a potential audit. Well: it buys you a safety net. We all do something stupid from time to time. We all make mistakes. Having another pair of eyes on the problem means we have a chance to catch them before they become a problem. Having partially reconstructed a database from backups at two in the morning – yay for whitelisting people to punch through DnD – I can tell you that waiting five minutes to execute a command in production is a lot less painful than the consequences of one single mistake.
Besides being a safety net, this is a huge compliance factor for a good number of certifications and accreditations. In many cases it will likely not be sufficient on its own, though. Single individuals being able to do whatever they want without controls and approval can become a problem at some point. Especially when those individuals have access to customer data and are about to leave the company because they feel they were treated badly, or someone else is willing to pay well for the information.
Consider four eyes on everything a practice that scales well with your team and enables you to fulfil new requirements and add additional controls when the time comes – without large scale changes to your overall workflow.
If you want to follow this article series you can either subscribe to the general RSS feed or to the tag specific one if you only care about startup security posts.