Calculate wind from thermals

Category : python shell script web

This post is a follow-up to the previous one about Paragliding data gems.

We have collected lots of flights and their GPS location data. From this, several million thermals were extracted and shown on a heatmap. A step forward is to classify these thermals into meaningful groups. Some parameters are easy to extract. For example:

  • Time of the day
  • The month of the year
  • The year
  • Change in altitude
  • Vertical velocity

I have shown in the previous post how the time of the day affects where a thermal is located. The other parameters are also nice to play around with.

One very interesting parameter would be the wind situation. As every pilot knows, the wind plays a crucial role in where turbulence occurs and where good conditions can be expected.

Finding the wind in our data

The wind consists of two values: speed and direction. And because both values can differ surprisingly with altitude, the wind could be calculated at several heights. For the sake of simplicity, I am calculating only one wind speed and direction per thermal for now. This could be improved in a later version, once we know there is enough data to fine-grain the selection even more.

How can we calculate wind speed and direction without access to wind stations, based only on the GPS track? It is not an easy task, because the aircraft can turn in any direction and slow down or speed up. Let’s have a look at what a typical thermal with wind looks like:

You can see part of a flight track starting on the lower left and ending on the upper right. There was significant lift and also significant wind drift to the (south-)east. In this case, simply comparing start and end would provide a good estimation of the wind speed and direction.

However, we cannot be sure that the pilot follows the wind. Many other scenarios are possible, for example pushing into the wind while flying through several smaller thermal areas. Have a look at this thermal:

Was this change in location caused by wind drift or by the pilot recentering into the core? We cannot tell by comparing the entry location with the exit location. But we can compute speed and bearing for each pair of consecutive GPS points. Plotting speed and bearing for another thermal with a strong west wind gives the following:

Here you see speeds between ~20 km/h and ~60 km/h depending on the bearing. The highest speed occurs at a bearing of ~120° and the lowest at ~270°. If the pilot steered against the wind for a longer time, these values wouldn’t change; we would merely see more points close to the existing ones in a certain area.

The plot above can be improved visually by using polar coordinates:

Here the bearing values are mapped around a circle and the distance from the center is the ground speed. If there were no wind at all, the points would form a circle centered on the origin. In the case above, we can still fit a circle: its center gives us wind speed and direction, while the radius of the best-fitting circle corresponds to the aircraft’s airspeed.
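To make this concrete, here is a minimal sketch of how the per-fix ground speed and bearing, and the polar plot, could be produced. This is not the project’s actual code: the input arrays are placeholders, the distance uses a simple flat-earth approximation (fine over the few meters between two fixes), and matplotlib is assumed for plotting even though it is not in the tool list further down.

import numpy as np
import matplotlib.pyplot as plt

def speed_and_bearing(lats, lons, times):
    # Ground speed (km/h) and bearing (degrees) between consecutive GPS fixes
    lat = np.radians(np.asarray(lats))
    lon = np.radians(np.asarray(lons))
    dlat = np.diff(lat)
    dlon = np.diff(lon) * np.cos(lat[:-1])      # scale longitude differences by latitude
    dist_m = 6371000.0 * np.hypot(dlat, dlon)   # flat-earth distance in meters
    dt_s = np.diff(np.asarray(times, dtype=float))
    speed_kmh = dist_m / dt_s * 3.6
    bearing_deg = (np.degrees(np.arctan2(dlon, dlat)) + 360) % 360
    return speed_kmh, bearing_deg

# Placeholder fixes standing in for one thermal (lat/lon in degrees, time in seconds)
lats = [47.6500, 47.6504, 47.6509, 47.6511]
lons = [11.7600, 11.7607, 11.7609, 11.7605]
times = [0, 4, 8, 12]
speeds, bearings = speed_and_bearing(lats, lons, times)

ax = plt.subplot(projection="polar")
ax.set_theta_zero_location("N")   # 0° points north
ax.set_theta_direction(-1)        # bearings increase clockwise
ax.scatter(np.radians(bearings), speeds, s=8)
plt.show()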

The circle can be approximated with some scientific help from Dr. Koch and the mind-blowing scipy.optimize.leastsq function.
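Here is a sketch of that circle fit, continuing with the speeds and bearings arrays from the snippet above. It illustrates the approach rather than the original implementation:

import numpy as np
from scipy.optimize import leastsq

def fit_wind(bearings_deg, speeds_kmh):
    theta = np.radians(bearings_deg)
    x = speeds_kmh * np.sin(theta)   # east component of the ground-speed vector
    y = speeds_kmh * np.cos(theta)   # north component

    def residuals(params):
        cx, cy, r = params
        # Distance of every point from the circle with center (cx, cy) and radius r
        return np.hypot(x - cx, y - cy) - r

    p0 = (0.0, 0.0, float(np.mean(speeds_kmh)))   # start: no wind, airspeed = mean ground speed
    (cx, cy, r), _ = leastsq(residuals, p0)

    wind_speed = float(np.hypot(cx, cy))                             # km/h
    drift_direction = (np.degrees(np.arctan2(cx, cy)) + 360) % 360   # bearing the drift points to
    airspeed = float(r)
    return wind_speed, drift_direction, airspeed

wind_speed, drift_direction, airspeed = fit_wind(bearings, speeds)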

Map it on the heatmap

So let’s calculate this for lots of thermals and see which thermals appear with which wind:

First of all, you can see a lot more red for east wind than for north(-east) wind. This could be either because east wind produces better thermals or because this wind situation simply occurs more often. The heatmap does not reason about the data, but the fact is that far more thermals appear with an east or west wind than with all the other directions. The database shows how significant that is:

There are way more thermals drifting to the east or west than in any other direction.
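The grouping into eight sectors is straightforward. A small sketch of how such a count could be produced (the wind_directions_deg values are placeholders, one drift bearing per thermal; the actual query against the database will look different):

SECTORS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def sector(bearing_deg):
    # 45° wide sectors centered on N, NE, E, ...
    return SECTORS[int(((bearing_deg + 22.5) % 360) // 45)]

wind_directions_deg = [250, 260, 95, 270, 100, 85, 355]   # placeholder values

counts = {}
for bearing in wind_directions_deg:
    name = sector(bearing)
    counts[name] = counts.get(name, 0) + 1

for name in sorted(counts, key=counts.get, reverse=True):
    print(name, counts[name])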

When zooming in on a certain area, we can see how frequently its thermals are flown depending on the wind. For example, the Wallberg is a well-known west-wind mountain:

North and south wind show very little thermal action. Even east wind does not come close to what happens with west wind. For the Wallberg, this states the obvious to experienced local pilots. However, it might not be obvious to beginners. And it can be helpful for understanding which cross-country routes work with which wind direction. Does the Baumgartenschneid (across the valley to the north) work with north wind?

Wrap up

We can now calculate wind speed from GPS fixes and use that data for filtering and grouping. The calculation is compute-intensive, and for now we keep only one value per thermal regardless of altitude.

Keep in mind that this data is based on wind calculations from the raw thermals. It is not necessarily the same as the overall wind of the day. And it says nothing about turbulence or difficulty. All we can see is that someone found thermal lift that happened to drift with the computed wind. We don’t know whether the drift was caused by a valley wind, a lee-side rotor, or the overall weather.

Have a look at the Wind based thermal heatmap for yourself.

Related read: Calculating wind speed from the GPS track

Parameters used:

  • Only thermals with a wind speed >= 5 km/h are shown
  • Each thermal has an altitude gain of at least 300 m
  • Wind directions are grouped into [N, NE, E, SE, S, SW, W, NW]
  • Data is based on ~600,000 thermals
  • Public flight database of the DHV, mostly from the German community

Technology used:

  • Python with pipenv
  • PyDev
  • Folium, Numpy, Scipy

Paragliding data gems

Category : python shell script web

Paragliding is my beloved hobby. Besides offering stunning views and perfect days outside, it also provides a huge amount of flight data to process and play around with. Sites like xc.dhv.de and XContest contain millions of flights from thousands of pilots. These documented flights are gems of data waiting to be investigated by algorithms.

A recorded flight of 4 hours that started in the Stubai valley, climbed up to 3800 m, flew to the Ötz valley and back.

Outsiders to this sport tend to believe it is about getting up a mountain and simply gliding down from there. This might be true for the first few flights, but it can be so much more than that. People stay in the air for up to 12 hours and cover distances of over 500 km. This is possible for the same reason birds can fly such long distances despite the tiny supply of energy in their bodies: the thermal activity of the air mass.

We all know the sideways wind on the ground, but the vertical winds can be just as strong as a horizontal breeze. Pilots use these upward winds to climb as high as possible (and safe) and then glide to the next thermal.

Many books have been written about which factors are important for upward winds and where/when you can expect them to be optimal. But with lots of flight data at hand, it should be possible to see this by example. The flights just have to be processed and visualized in the right way. This is what Dr. Maximilian Koch and I worked on over the lockdown period.

Get the data

We started with simple scripts for downloading flight data, including all the geo-coordinates. For now, this operates on a limited data set, because scraping more than a million flights takes time and the process needs to be evaluated first. We don’t want to spider everything and then figure out that something is missing or handled incorrectly. The following examples are based on ~36,000 flights, flown with a paraglider or a hang glider.

At first, all the data is processed to extract basic information like the takeoff site, the date, and the pilot’s name. In a second step, all the geo-coordinates are analyzed for thermal activity. I am using igc-lib for parsing the flights. Judging from some examples, it is not perfect, but it works well enough on big data sets.

It evaluates the geo-coordinates and extracts thermals and glides from the flights. This information is stored in a local database. Each thermal contains information about the start and end time, the height at both times, the vertical velocity, etc.
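A minimal sketch of that extraction step with igc-lib is shown below. The file name is a placeholder, and the method and attribute names follow the library’s README at the time of writing, so they may differ between versions:

import igc_lib

flight = igc_lib.Flight.create_from_file("example_flight.igc")
if flight.valid:
    for thermal in flight.thermals:
        print(thermal.enter_fix.lat, thermal.enter_fix.lon,
              thermal.alt_change(),          # altitude gained in m
              thermal.vertical_velocity())   # average climb in m/s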

Thermal heatmap

Thermal heatmap based on ~36,000 paragliding and hang gliding flights

What can we use that data for? An obvious use case is to show all the thermals on a map, as in the image above. You can see the typical flying routes marked in red. Areas with lots of data appear completely colored, but only because the map is zoomed out so far.

Thermal heatmap zoomed in. The dots appear close to peaks; only a few are above the valleys.

When zooming in further, we can see in more detail where to expect upward winds. As the theory states, it is mostly above the peaks and ridges. So far this is similar to other work in the same direction. For example, the paragliding maps show a similar pattern.

Our basic heatmap can be seen and navigated here.
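A stripped-down sketch of how such a map can be built with Folium follows; the coordinates and the thermal_positions list are placeholders, and the real map uses far more data and tuning:

import folium
from folium.plugins import HeatMap

thermal_positions = [(47.65, 11.76), (47.43, 11.05)]   # placeholder: one (lat, lon) per thermal

m = folium.Map(location=[47.5, 11.5], zoom_start=9)
HeatMap(thermal_positions, radius=8, blur=6).add_to(m)
m.save("thermal_heatmap.html")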

Time based activity

One factor that changes thermal activity is the time of the day. Depending on the sunlight, different areas of the ground are heated and generate a warm airflow.

Therefore, it is interesting to see how thermal activity changes during the day. Here is a time-based heatmap in which you can step through all thermals of the day on an hourly basis.
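The hourly stepping can be done with Folium’s HeatMapWithTime plugin. In this sketch, thermals_by_hour is a placeholder dict mapping the hour of day to a list of [lat, lon] pairs:

import folium
from folium.plugins import HeatMapWithTime

thermals_by_hour = {10: [[47.65, 11.76]], 13: [[47.43, 11.05]]}   # placeholder data

hours = list(range(6, 21))                                        # 06:00 to 20:00
data = [thermals_by_hour.get(h, []) for h in hours]

m = folium.Map(location=[47.5, 11.5], zoom_start=9)
HeatMapWithTime(data, index=["{}:00".format(h) for h in hours], radius=10).add_to(m)
m.save("thermal_heatmap_hourly.html")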

The hourly information can be useful to see when and where it is possible to launch in the morning. For long-distance flights, you need to start as early as possible and gain height.

As mentioned before, the data is not complete and will always have a bias. There are certain areas and routes which pilots typically take, and stepping through the hours makes these routes visible. Those routes are often used because they are the best possible options. Therefore, even the limited data set is useful for flight planning, as it shows the most relevant information on the map.

Fun facts

In this dataset of ~36,000 flights, the strongest thermals have an average climb rate of 7 m/s from entry to exit. There are some outliers showing more than 10 m/s, but all of them can be explained by hardware issues at the start of a flight. The 7 m/s is an average over the whole climb, so there were seconds of even stronger climbing as well.

The maximum height gain is 2282 m, to an exit height of over 4100 m. This height was reached with an uplift of just below 2 m/s. Within the 36,000 flights, more than 200,000 thermals with a height gain of at least 100 m are reported. So there is an average of 5.5 such thermals per flight. The number is probably much higher in summer than in winter.

Next steps

As mentioned a few times, the data set is still very limited. So one goal is to improve the data and download more flights from the respective sites.

There are other interesting questions to ask and possibly answer:

  • Can we calculate the wind conditions out of the tracks?
  • How does it affect thermal activity?
  • Is there something interesting to see in the glides? Can we figure out the best gliding routes, maybe based on other factors?
  • Is it possible to make the data more relevant during a flight? For example, a pilot would only be interested in thermal data that can make him reach a higher position. At the same time, only thermals that can be reached from the current position are of interest.

Can you think of more interesting questions that might be answered by that data? Send me an email and if it is easily possible, I can have a look into it.


A Pypy Runtime for AWS Lambda

Category : AWS python

Python is a great language for the on-demand style of Lambda, where startup time matters. In terms of execution speed, though, there are better choices available. Where computational performance matters, one improvement is to use Pypy, a Python interpreter with a JIT compiler. It can execute the same code much faster, with just a slight penalty in startup time compared to CPython.

I was curious how Python and Pypy would compare on AWS Lambda. As Amazon announced recently, it is now possible to provide your own Runtime for Lambdas.

Creating a Custom AWS Lambda Runtime

Before you start creating your own runtime, you should have a simple Lambda function. I recommend starting by creating a serverless application. This way you not only get a plain Lambda but also an API Gateway and CloudWatch logs set up. And it is much quicker to edit code, iterate, and put it under version control.

A custom runtime is based on a shell script (called bootstrap) that you have to provide. This script does initialization work, calls an HTTP interface to request the next task/incoming request, dispatches it to whatever runtime you are providing, and responds to another interface with either a success or an error message.

Starting from the example is easy. You can quickly set up a test project using serverless that executes the example bootstrap code. The example runs in an endless loop, working on one task in each iteration. AWS presumably starts and kills this loop based on how many tasks are waiting for execution and probably some other factors.

Pypy does not run out of the box; the interpreter has to work in Amazon’s Linux environment. Unfortunately, downloading a compiled binary didn’t just work for me, and I couldn’t find a version built specifically for Amazon Linux. The problem was that the libbz2 library was not available. In fact, it is available in the environment, but Pypy does not find it. The recommended solution of creating a symbolic link to the library is also not an option, because the environment is read-only (except for /tmp/). To not spend too much time on this, I fired up an EC2 instance with the Amazon default image and copied that library next to the Pypy interpreter into my package.

To get a first “Hello World from Pypy” application running, you need to call Pypy from within the shell script and send the response back to the Runtime Interface. There is no error handling yet, and starting a new Pypy process on every request is far from optimal, but this is already a working solution.

A better way is to move the processing loop from the shell script into Pypy. This way there is already a running process, all imports have been done, and if parts of the code use initializers or caching, this state is kept for the next request. The bootstrap script looks a lot simpler now:
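In essence, the bootstrap just starts Pypy on a Python script that contains the loop. A minimal sketch of such a loop, using the documented Lambda Runtime API endpoints (the handler module and its handle function are placeholders for the actual function code), could look like this:

import json
import os
import urllib.request

import handler   # placeholder module providing handler.handle(event)

API = "http://{}/2018-06-01/runtime/invocation".format(os.environ["AWS_LAMBDA_RUNTIME_API"])

while True:
    # Long-poll the Runtime API for the next event
    with urllib.request.urlopen(API + "/next") as nxt:
        request_id = nxt.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(nxt.read())

    try:
        body = json.dumps(handler.handle(event)).encode()
        url = "{}/{}/response".format(API, request_id)
    except Exception as exc:
        body = json.dumps({"errorMessage": str(exc),
                           "errorType": type(exc).__name__}).encode()
        url = "{}/{}/error".format(API, request_id)

    # Report the result (or the error) back for this request id
    urllib.request.urlopen(urllib.request.Request(url, data=body, method="POST"))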

The logic is now located in Python code and run by Pypy. With the custom runtime code in place, we can switch between a standard Python3.7 runtime and our own in the AWS Lambda web interface:

Comparing Pypy and Python3.7

So how does the simple Pypy runtime compare to the default Python3.7 implementation? Let’s create an example where the Lambda has to use its CPU. I wrote a simple one-liner to calculate prime numbers from 2 to 200000:
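A plausible one-liner of that kind, using plain trial division (this is an illustration, not necessarily the exact code used for the benchmark):

primes = [n for n in range(2, 200000) if all(n % d for d in range(2, int(n ** 0.5) + 1))]
print(len(primes))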

Of course there are better algorithms for the same thing, but it serves the purpose well. Calculating the prime numbers takes considerably longer with CPython than with Pypy. On my machine, it takes around 1 second with Pypy and 3 seconds with CPython.

In the AWS Lambda environment, both runtimes execute the same handler.py and calculate the prime numbers. This is how long they take to run the code:

Calculating the primes takes more than 5 seconds when executed with python3.7 but only slightly above 2 seconds with Pypy. We have found a case where Pypy is a lot better than CPython. This was my hope when starting to build the runtime.

But, as you can see in the first request, the Pypy runtime takes a long time to initialize. This is logged in CloudWatch as “Init Duration: 1641.69 ms”. In this test scenario, it does not matter because a request takes many seconds to finish. With its better computational performance, the Pypy runtime still comes in first. In a more typical scenario, this Init Duration will be much more important. And this brings us to the downsides of this approach.

Downsides of the Custom Runtime

The initialization phase takes way too long. It is not visible what exactly happens during that time, but the bulky size of the code package is most likely part of it.

Let’s rerun the same test as before but without the heavy calculation and ignore the Init Duration:

The execution time for a “Hello World” application is higher than with Python. I don’t understand why this is the case. Monitoring the Pypy runtime_interface gives me sub-millisecond times for what my code executes. Still, the Lambda Duration is reported to be somewhere between 10 and 30 ms. In contrast, executing the function with Python3.7 gives Durations close to 1 ms with only a few spikes. These should be more or less equal. There is either a problem in my implementation or in how AWS handles a custom runtime. If you have an idea what goes wrong here, please add a comment. In any case, this diagram is much closer to real-world usage. And Python is faster here.

Also, the deployment package is big. Nearly 30 MB are uploaded to S3, even though there is hardly any function code inside. For many cases, this is going to be a showstopper. I believe the package size can be reduced further by specifying in more detail which Pypy files are necessary. If Amazon ever offers this as a default choice, the issue would go away, because then you would not have to upload the interpreter within your package.

Wrap-up

Running Pypy on AWS Lambda as a custom runtime is possible and not very complicated. There is a clear advantage over CPython when it comes to long-running computations. Packaging the whole interpreter bloats up your Lambda package and increases your initial startup time. Typically, being lightweight and having a quick startup is more important than raw computational speed. Therefore, I can only recommend this approach for exceptional cases.

If Amazon decides to provide Pypy as a default Runtime, this could be different. You would not have to bundle the interpreter and the startup time might become a lot better than now while the computational advantage of Pypy will still be there.

You can find all the code in my Github repository.


Serverless on AWS Lambda

Category : Android

Serverless computing is a cloud-computing execution model in which the cloud provider dynamically manages the allocation of machine resources.

On my current project, I had the freedom to create a new service from scratch and to choose the technology stack relatively freely. The only given was that it should either run inside the old datacenter or on AWS.

Serverless

Having administered root servers in the past, I also know how easy it is to fail at that task. It takes a lot of effort to keep a system up to date, adjust the configuration for changing requirements, and handle hardware failures. With serverless services like AWS Lambda, all this is abstracted away and handled by the cloud provider. It auto-scales your service on demand when the load increases. Even better, AWS Lambda only charges you per request (and for the compute time each request uses). If you don’t use a service, the only thing you pay for is the zip file stored on S3.

Compared to a typical service based on virtual machines or containers, these benefits are huge. However, they also come with downsides. Most notable is the performance impact. AWS will kill idle Lambda instances and spin them up on demand. This means the service has to be loaded from S3, extracted, and started when the first request after a longer idle period comes in. There are huge differences in the startup times of the various technologies: Node.js and Python are at the fast end, while Java with Spring takes a lot longer to start.

Caching

As instances can be stopped at any time, this also affects caching strategies. There is a simple cache in API Gateway which caches full responses. It can be activated by ticking a checkbox and setting the maximum cache size. However, often you don’t want to cache the complete response, but rather the pieces of information required to compute it from the input values.

It is possible to cache data inside a Lambda, but there is no guarantee about how long it will be available. This depends on the load, the access patterns, and probably how many resources Amazon currently needs. If you want more control, you have to use an additional service like ElastiCache. However, that part cannot be started on demand. As a central instance which has to be available and serve requests quickly, the cache has to be up and running all the time, and you will be charged even when it is not used.
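For illustration, here is a minimal sketch of such an in-Lambda cache using module-level state. The names and the 5-minute TTL are made up for this example and not taken from the actual service:

import time

_CACHE = {}          # survives between invocations while AWS reuses the container
_TTL_SECONDS = 300   # illustrative time-to-live

def _load_config():
    # Placeholder for an expensive call (database, S3, external API, ...)
    return {"loaded_at": time.time()}

def _get_cached(key, loader):
    entry = _CACHE.get(key)
    if entry is None or time.time() - entry[0] > _TTL_SECONDS:
        entry = (time.time(), loader())
        _CACHE[key] = entry
    return entry[1]

def handler(event, context):
    config = _get_cached("config", _load_config)
    return {"statusCode": 200, "body": str(config["loaded_at"])}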

In the case of my service, the load is high enough to make sure there is one Lambda instance running all of the time. It is not always the same instance, but changes are rare enough to provide a good amount of cache hits.

Wrap up

Going serverless was a great choice and is superior in many ways to manually running a service in a container, a VM, or even on bare metal. It provides the option of scaling to a nearly infinite number of requests (whatever Amazon can handle) without the hassle of configuring complex auto-scaling strategies.


Make your website or web-app offline available

Category : Android web

Android developers vs. web

One great advantage of native apps over web apps is that they don’t depend on an online connection to start. Therefore the startup is fast and you can use them in no-network conditions. But web apps can have this advantage too, when done right.

If you look at a website like Google Docs, you notice that it appears even when you are offline (given that you have visited the same page before). It is even possible to edit files while offline. You can achieve the same with an HTML5 feature called the Offline Application Cache.

Use the Offline Application Cache

While keeping state locally and syncing it requires more effort, making your web(site/app) available offline is easy. You just have to use the Offline Application Cache. This is a single file with information about everything the client should keep in its local cache.

First, you create a file called cache.manifest with the following content:

CACHE MANIFEST
# Cache version #1

CACHE:
/index.html
/css/style.css
/images/header.png
/images/footer.png
/images/background_portrait.png

Change the resources below CACHE: to the required files of your project. Keep in mind that these resources are not requested again, even if they have changed. If you want them to be re-downloaded, you need to change the cache manifest itself. This is the reason for the version counter: increase it by one to make clients refresh all resources.

The next step is to reference it in your page’s html tag:

<html manifest="cache.manifest">

Make sure your server delivers the manifest with the text/cache-manifest MIME type. Then save, refresh the website on your client, and you have an offline-capable app. You can test the Offline Application Cache by switching off your server or internet connection and refreshing again. The page will reload despite having no connection.

Network and Fallback

In case you have more dynamic content there are two sections you can use in the Offline Application Cache file called NETWORK and FALLBACK:

# Download default_state.xml to local cache
CACHE:
/default_state.xml

# Resources that have to be downloaded from a server
NETWORK:
/current_state.xml

# default_state.xml is used when current_state.xml is not available
FALLBACK:
/current_state.xml /default_state.xml

In this case, current_state.xml requires an online connection and is never cached. default_state.xml is added to the cache and used as a fallback when current_state.xml cannot be downloaded.

For example, instead of your state data you can put a “state could not be retrieved” message into default_state.xml.

Wrap up

It is simple to make your web app offline capable. Most of the hybrid- or web-apps I see on the market fail to work without an online connection. It is a pity, because there is so little work necessary to greatly enhance the user experience.

Dynamic state is a different thing, though. Keeping and syncing state is a hard topic, whether native or on the web. While not simple, it is possible with HTML5 Local Storage.

Still, showing your own website with a message is much better than the default browser error. If you have web parts in your application that come from a remote server, be sure to use the Offline Application Cache, at least for the front page and your resources.

A Beginner’s Guide to Using the Application Cache

HTML Standard Application caches


Native- and Mobile Web Apps

Category : Android java

As a native app developer since 2008, I have seen time and time again the wish to develop everything with one toolset. Most often, the toolset of choice is the web: HTML, CSS, and JavaScript. In all cases I have experienced, this was the wrong move and users never liked the web app. Therefore, I say that native apps are superior to what a web app can achieve for most mobile use cases.

Web vs. Native

However, that does not mean web technology is a bad platform. It was just designed for different devices and use cases. For example, it is incredibly easy to show rich documents with web technology. This would be a hard task in native development, and quite often a web view is embedded into a native app for exactly that reason. Over the last decade(s), many great use cases have been enabled in web technology that were unthinkable a few years before. Think of Google Maps, live chats on websites, YouTube, 3D content, etc.

But there are important mobile aspects where the web still fails to deliver. First and foremost, there is no layout mechanism that matches Android’s way of developing for multiple screens. Unlike on a desktop, where pixel density has been relatively stable for a decade, on mobile devices it can be completely different. A normal phone screen might have a resolution of more than Full HD, while a 10″ tablet still has an HD-ready resolution. A single pixel is much, much smaller on the phone than on the tablet. If you are designing your website with pixel sizes, your graphics might have the right size on one of them, but not on the other.

The normal solution in the web world is to use percentages of the screen size instead of hard coded pixels. This solves the problem above, but introduces a new one. What is the right image to show if it stretches to some percentage of the screen? It should be big enough to use the great phone screen, but not bigger than necessary to save bandwidth and keep page loading times low.

A related problem is sizing a button correctly. You want the size of a button to approximately match the size of a fingertip, so it can be pressed easily without taking too much screen space. This works neither with pixels nor with percentages of the screen.

If we want to solve this problem, we first have to understand that it is not one-dimensional. Size is not the only parameter we have. A user’s device can be at any position in these two dimensions: [small – large] X [low dpi – high dpi]. Android uses resource folders for both dimensions, and each device picks the correct format. There is an explanation in the designing for multiple screens documentation.

Another thing is integration into the system. How do you create an Intent? How do you set an alarm? What about receiving push messages? If your app doesn’t need these features, that is fine. But working around the limitations of a web container with native bridges forces you to maintain both native code for several platforms and the web content.

And finally, the promise of “develop once, it works everywhere” is simply not true for web technology. Different browsers behave differently, and websites are cluttered with special-case handling for certain clients. You still have to test and maintain how your web app looks on iPhone, iPad, several Android phones, browsers, and any other platform you care about. And I am not yet talking about a platform-specific look and feel.

The right tool for the job

However, there are certainly cases when a web application makes a lot of sense. If you are mainly mirroring website content, it probably is a good idea to reuse much of your existing website. Even I, as a strong native promoter, have chosen to develop a web application in my last project.

The project was about using a tablet to control hardware, in this case several lights and video playback in a car prototype. Besides the pain points explained above, a web application has its own strengths, like being available without installation.

Another reason for not using a native approach was the server part. If there is a (web) server anyways, it can as well serve web content instead of just data and instructions. A native app would require one more layer on top of everything.

Most drawbacks mentioned above don’t apply in this case. The system includes one specific set of tablets, so it doesn’t have to adapt to multiple screens. The simple layout does not depend much on resolution and dpi, because most of it is text and vector content. The only image used is the background. And no system integration is necessary.

Not just a boring website

Above is a simplified version of the app that I use for development. It is connected to a 4-channel LED (red, green, blue, white) to generate different colors. In the Administratormodus (administrator mode), you can modify the color values of each channel.

To make it feel more like a native app, it has a home screen icon and no URL bar on top. The application uses the whole available screen, except for the notification bar and the navigation buttons. It also features a splash screen while loading. This is easy to achieve, but goes a long way towards making your web app feel like it belongs on the platform. Read more about it in Making Fullscreen Experiences.

Wrap-up

For this project, I believe it was the right approach. The customer is happy with a lightweight and clean solution. But it was a special case in a well-defined environment. For most consumer apps, I still recommend native development.


This website switched to HTTPS

Category : web

My web hoster 1blu has finally added the possibility to get a free Let’s Encrypt SSL certificate for this website. So www.ulrich-scheller.de is now available via HTTPS.

Let’s Encrypt

With Let’s Encrypt, getting an SSL certificate is free. There are many reasons for using HTTPS and hardly any against it. This is especially true whenever credentials are entered to log into your website. In the case of this blog and profile page, it didn’t matter too much, because the only person with a login is myself. All the content was and is publicly available, so there is no need for a high security level. However, HTTPS is also one of many ranking signals for search engines. And it is just good practice to use encryption wherever it doesn’t hurt.

Simple but not that simple

Moving your website over to HTTPS is very easy. Usually, you just tell your web hoster to get and install a certificate. Or, in case you are hosting yourself, you do these steps on your own.

However, keep in mind that this doesn’t magically fix all the problems. Once you have your certificate installed and HTTPS running, there is more to check:

  • Links on your website should not point to HTTP URLs. Make sure no internal link starts with “http://”. The best way to fix this is not to change http to https, but to leave out the protocol and domain part completely. This way links will still work, even if you move back to HTTP or change your domain. You can use www.whynopadlock.com to check the links on your site.
  • Change the url of your Page in WordPress to https://
  • When another blog has linked to your page, that link will continue to point to the HTTP version. For search engines, your HTTP website is different from the HTTPS site. To prevent losing a good ranking, make sure to redirect from the HTTP to the HTTPS version. This also ensures your visitors will use the secure protocol from now on.

There can be much more to do, depending on your setup. Read through this detailed guide to see what else might apply in your case. WordPress and professional web hosting simplify much of that work. However, most of the time they also limit how far you can go. For my page, I had to wait multiple years until Let’s Encrypt became available. Features like HTTP/2 support are still not available.

Now I need to go ahead and fix the links on this site. If you see anything else that breaks encryption on my site, please tell me in the comments.


Virtual Reality Experience with Google Daydream VR

Category : Android java

Virtual Reality is a hot topic these days. A few weeks ago I had the opportunity to test an Oculus Rift with Touch Controllers. PlayStation VR and HTC Vive have also been released lately. Android Developers like me have their Cardboards, which are a very low-cost option.

Daydream VR

With the release of their Pixel devices, Google announced the Daydream VR. Similar to the Cardboard, you place your mobile phone in the VR headset and don’t need additional high-end hardware. For 70€ it is still a low-cost solution, if you don’t factor in the expensive phones.

My first attempt at Daydream VR unfortunately was not successful. I got the small Pixel phone, which worked flawlessly except when being used in the Daydream VR headset. It rebooted regularly, a problem many others around the web have reported as well. And even worse, it had extreme visual drift, as you can see in the video below.

It is hard to tell how bad that visual drift is. Your vision turning around while your body tells you there is no change in orientation makes you feel sick within a minute.

So after playing around and doing a factory reset, I decided to return the device and get the Pixel XL instead. It turned out this was a good choice. With the Pixel XL everything works flawlessly. Head tracking has no noticeable delay and the touch controller works great.

Experience

Compared to a Cardboard, this setup is a great improvement. A Cardboard only has a single button for user interaction. The touch controller gives navigation a whole new dimension. In games it is used as a magic wand, for controlling a steering wheel, or for tilting a playground to move a ball around. Every game seems to have its own way of navigating. I believe we will see a lot more navigation styles before a few crystallize as a standard.

While Daydream with the controller is much better than before, you also see what is still missing. Turning your head around works great, but moving is not possible at all. In a VR world like Fantastic Beasts I want to move around and look at the beasts from all sides. In most of the applications this is not possible.

Graphics are pretty good with the right game/application. The detail level is impressively close to an Oculus Rift. However, in both VR systems you can recognize individual pixels. Even a resolution of 2560×1440 pixels is not much in VR mode, because it has to be split between two eyes and fill the whole field of view. But every current VR system has this problem.


Personal Jenkins Server with Docker

Nowadays, every software development team should have a continuous integration server like Jenkins. This is as true for Android developers as for any other platform. It makes sure the current source code compiles and all the tests succeed, so nobody is blocked by a broken build. A CI also forces you to have a one-step build and to perform it regularly, usually on every commit.

Most often a continuous integration platform is used by development teams. However, it also gives many benefits to single developers. Multiple times I had the problem that old projects would not compile or work after switching to a new computer. Also, I often forgot to run all tests when they seemed unrelated to my code changes. And last but not least, there is the deployment pain when the last deployment was long ago.

Where to host?

A Jenkins build server would solve all these problems. But I didn’t want to spend a lot of money on hosting, because it is for private, closed-source projects with no profit. And running Jenkins on my development machine does not help, because it is still the same environment as my IDE. My first idea was to use a Raspberry Pi server. While it is not a fast computer, it runs on very little power and would have more than enough time for builds.

After playing around with it, I discarded that option again. Jenkins on a Raspberry Pi works, but it is not an x86 device. So if you are not using the Android SDK, a Raspberry Pi might be an option. For me it is not, because the Android SDK is not available on ARM systems.

Another low-cost option is VirtualBox. You set up a Linux server inside a virtual machine and host Jenkins in there. Although the virtual machine is hosted locally, it can easily be transferred anywhere if necessary. I had this option in mind for a while, but I didn’t like the overhead VirtualBox brings with it.

So when Docker announced their MacOS release, I was eager to try it.

Set up Jenkins

Installing Docker is an easy task. There is a good Getting Started guide on their website, where you learn how to use the docker command and run containers. It turns out there already is a working Docker image with Jenkins installed. Run it with

docker run -p 8080:8080 -p 50000:50000 jenkins

This starts up a new container with Jenkins running on port 8080. You will see the following website:

Docker Jenkins first login

To get the initialAdminPassword, you have to access the running container.

Ulrichs-MBP:~ uscheller$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                              NAMES
82812e782a95        jenkins             "/bin/tini -- /usr/lo"   7 minutes ago       Up 7 minutes        0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp   stupefied_bell

This will show you all running containers. Look up the name of the container; in my case it is stupefied_bell. Edit the following command with your container name:

docker cp stupefied_bell:/var/jenkins_home/secrets/initialAdminPassword . && cat initialAdminPassword && rm initialAdminPassword

This prints the initialAdminPassword for logging in. If you enter it on the website, you can continue the setup process and have a running Jenkins in Docker.

Useful commands

You can stop your running machine with

docker stop stupefied_bell

When you want to start it again, don’t use the docker run command from above. It would create a new instance. Instead, simply use

docker start stupefied_bell

Get a shell on this container as user jenkins

docker exec -it stupefied_bell /bin/bash

Get a shell on this container as root

docker exec -u 0 -it stupefied_bell /bin/bash

Install Android SDK from shell

Docker Speed

Performance-wise this setup is great. The Docker container starts and stops within seconds. The only delay is caused by Jenkins’ startup time, showing the please-wait message. My Jenkins is not yet overloaded with plugins and jobs, so it takes less than 15 seconds for everything to be available.

I tested the build-performance with my Android project Laska:

gradle clean build test

Executing this command on my native machine takes around 2 minutes 34 seconds. The Docker container takes only 1 minute 54 seconds. In multiple runs the outcome was always in favor of Docker. I cannot explain why this happens, as native should be the fastest. It might be a configuration issue on my machine.

Wrap up

Using Docker to host Jenkins is a great solution for solo developers. It is easy to set up, especially with a pre-packaged Jenkins container, and can be transferred to a dedicated server if necessary. Build speed is the same as in a native environment.


Should you go for self-employment?

Category : career

I am now 9 months into my career as a freelance consultant and would like to share my experiences. Whether you should make that switch too depends on many factors and cannot be answered in general. Much of it is about yourself rather than the outside world.

Did it work out?

This is the important question to ask when someone has created a company. During the last 9 months I had a constant stream of work that was paid much better than anything before. And this despite stepping back from managing teams of 20 developers to writing software myself.

Because a developer has fewer responsibilities and fewer meetings than a team lead, it is much easier to take a day off. This is very valuable to me as a paraglider, as the sport strongly depends on weather conditions. Last year I had several incredible (for myself) flights and great experiences. Before, I was trying to achieve this year after year as an employee, but there was always some important meeting or another reason preventing it.

On the other hand, I sometimes miss being important. I enjoyed building teams of developers and optimizing the way we work on many levels. I also enjoyed being responsible for products with millions of active users. However, this is not typical for most employees but rather a special case of my previous role. And towards the end, the fun was declining more and more in favor of company politics, pushing Excel boxes, and taking part in software design committees.

So overall, yes, it worked out very well. I am much more relaxed, happier, and being self-employed gave me the biggest income boost I have ever had, by far.

Is it hard?

This depends a lot on what you already know and what type of person you are. Obviously there are some hard parts to creating your own company. The hardest is that you will be responsible for everything. You have to find customers yourself, sell yourself as valuable to their project, handle all the paperwork and taxes and never break any law.

Many of us hate responsibility. I am not talking about being responsible for finding a restaurant for dinner tonight. I am talking about make-a-big-mistake-and-you-go-to-jail responsibility. If you are that type of person, you will probably find it difficult.

I was lucky to be responsible for several teams of developers in my last job, so my mind was already prepared for having lots of responsibility. Additionally, I was self-employed part-time during university. So when I tell you it is not hard at all, take this into consideration. For many developers this might be a bigger change.

Keep your eyes open

In Germany, there is a subsidy for founders coming out of unemployment called Gründungszuschuss. I urge anyone interested to look further into it, as it basically is free money with nothing asked in return. The Gründungszuschuss is an incentive for unemployed people to start their own business. I will not go into more detail here, but it is pretty easy to become temporarily unemployed.

I knew this subsidy existed, but I never believed I would be eligible for it. I needed my tax advisor to tell me that in fact I was. You will probably need the same in other cases. Talk to people in a similar situation and read about it. But most importantly, once you choose to become self-employed, take action.

Next steps

So currently I am really happy with my situation. I have a lot of personal freedom, 3 days of home office per week, and good pay.

The next optimization I wish to make is breaking the money-for-time relation. But this is another topic for another post at another time 🙂

Should you do the same? That depends on your own situation, but in many cases it is a big win. Drop me a note if you are considering it.

