Category Archives: Android


Make your website or web-app offline available

Category : Android , web

Android developers vs. web

One great advantage of native apps over web apps is that they don’t depend on an online connection to start. Startup is fast and you can use them in no-network conditions. But web apps can have this advantage too, when done right.

If you look at a website like Google Docs, you notice that it appears even when you are offline (given that you have visited the same page before). It is even possible to edit files while offline. You can achieve the same with an HTML5 feature called the Offline Application Cache.

Use the Offline Application Cache

While keeping a state locally and syncing requires more effort, making your web(site/app) offline available is easy. You just have to use the Offline Application Cache. This is a single file with information about everything the client should keep in its local cache.

At first you create a file called cache.manifest with the following content:
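A minimal manifest could look like this (the entries under CACHE: are example placeholders for your own files):

    CACHE MANIFEST
    # version 1

    CACHE:
    index.html
    style.css
    app.js
    img/logo.png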

Change the resources below CACHE: to the required files of your project. Keep in mind that these resources are not requested again, even if they change. If you want them to be re-downloaded, you need to change the cache manifest itself. This is the reason for the version counter: increase it by one to make the clients refresh all resources.

The next step is to reference it in the html tag of your site:
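    <html manifest="cache.manifest">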

Save, refresh the website on your client, and you now have an offline-capable app. You can test the Offline Application Cache by switching off your server or internet connection and refreshing again. The page will reload despite there being no connection.

Network and Fallback

In case you have more dynamic content there are two sections you can use in the Offline Application Cache file called NETWORK and FALLBACK:
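Both go into the same manifest file, below the CACHE: section (the file names match the example discussed next):

    NETWORK:
    current_state.xml

    FALLBACK:
    current_state.xml default_state.xml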

In this case, current_state.xml requires an online connection and cannot be cached. default_state.xml will be added to the cache and used as a fallback when current_state.xml could not be downloaded.

For example, instead of your state data you can put a “state could not be retrieved” message into default_state.xml.

Wrap up

It is simple to make your web app offline capable. Most of the hybrid- or web-apps I see on the market fail to work without an online connection. It is a pity, because there is so little work necessary to greatly enhance the user experience.

Dynamic state is a different thing, though. Keeping and syncing state is a hard topic, whether native or on the web. While not simple, it is possible with HTML5 Local Storage.

Still, showing your own website with a message is much better than the default browser error. If you have web parts in your application that come from a remote server, be sure to use the Offline Application Cache, at least for the front page and your resources.

A Beginner’s Guide to Using the Application Cache

HTML Standard Application caches



Native- and Mobile Web Apps

Category : Android , java

As a native app developer since 2008, I have seen time and time again the wish to develop everything with one toolset. Most often, the toolset of choice is the web: HTML, CSS and JavaScript. In every case I have experienced, this was the wrong move and users never liked the resulting web app. Therefore, I say that native apps are superior to what a web app can achieve for most mobile use cases.

Web vs. Native

However, that does not mean web technology is a bad platform. It was just designed for different devices and use cases. For example, it is incredibly easy to show rich documents with web technology. This would be a hard task in native development, and quite often a web view is embedded into a native app for that reason. Over the last decades, many great use cases have been enabled in web technology that were unthinkable a few years before. Think of Google Maps, live chats on websites, YouTube, 3D content, and so on.

But there are important mobile aspects where the web still fails to deliver. First and foremost, there is no layout mechanism that matches Android’s way of developing for multiple screens. Unlike on a desktop, where pixel density has been relatively stable for a decade, on mobile devices it can be completely different. A normal phone screen might have a resolution of more than full HD, while a 10″ tablet still has an HD-ready resolution. A single pixel is much, much smaller on the phone than on the tablet. If you are designing your website with pixel sizes, your graphics might have the right size on one of them, but not on the other.

The normal solution in the web world is to use percentages of the screen size instead of hard coded pixels. This solves the problem above, but introduces a new one. What is the right image to show if it stretches to some percentage of the screen? It should be big enough to use the great phone screen, but not bigger than necessary to save bandwidth and keep page loading times low.

A related problem is sizing a button correctly. You want the size of a button to approximately match the size of a fingertip, so it can be pressed easily without taking too much screen space. This works neither with pixels nor with percentages of the screen.

If we want to solve this problem, we first have to understand that it is not one-dimensional. Size is not the only parameter we have. A user’s device can be at any position in these two dimensions: [small – large] x [low dpi – high dpi]. Android uses resource folders for both dimensions, and each device picks the resources that match it. There is an explanation in the designing for multiple screens documentation.
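For illustration, a typical set of resource qualifiers looks like this (which buckets you provide depends on your app):

    res/layout/            default layouts
    res/layout-sw600dp/    layouts for tablets (smallest width of at least 600dp)
    res/drawable-mdpi/     bitmaps for medium-density screens
    res/drawable-xhdpi/    bitmaps for extra-high-density screens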

Another thing is integration into the system. How do you create an Intent? How do you set an alarm? What about receiving push messages? If your app doesn’t need these features, that is fine. But working around the limitations of a web container with native bridges forces you to maintain both native code for several platforms and the web content.

And finally the promise of “develop once, it works everywhere” is simply not true for web technology. Different browsers behave differently and websites are cluttered with special case handling for certain clients. You still have to test and maintain how your web app looks on iPhone, iPad, several Android phones, browsers and other platforms you care about. And I am not yet talking about a platform-specific look and feel.

The right tool for the job

However, there are certainly cases when a web application makes a lot of sense. If you are mainly mirroring website content, it probably is a good idea to reuse much of your existing website. Even I, as a strong native promoter, have chosen to develop a web application in my last project.

The project was about using a tablet for controlling hardware, in this case several lights and video displays in a car prototype. Besides the pain points explained above, a web application has its own strengths, like being available without installation.

Another reason for not using a native approach was the server part. If there is a (web) server anyway, it can just as well serve web content instead of just data and instructions. A native app would require one more layer on top of everything.

Most drawbacks mentioned above don’t apply in this case. The system includes one specific set of tablets, so it doesn’t have to adapt to multiple screens. The simple layout does not depend much on resolution and dpi, because most of it is text and vector content. The only image used is the background. And there is no system integration necessary.

Not just a boring website

Above is a simplified version of the app that I use for development. It is connected to a 4-channel LED (red, green, blue, white) to generate different colors. With the “Administratormodus” (administrator mode), you can modify the color values of each channel.

To make it feel more like a native app, it has a home screen icon and no URL bar on top. The application uses the whole available screen, except for the notification bar and the navigation buttons. It also features a splash screen while loading. This is easy to achieve, but goes a long way in making your web app feel like it belongs to the platform. Read more about it in Making Fullscreen Experiences.
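The relevant bits are a few tags in the page head; a minimal sketch (the icon file name is a placeholder):

    <meta name="mobile-web-app-capable" content="yes">
    <meta name="apple-mobile-web-app-capable" content="yes">
    <link rel="icon" sizes="192x192" href="icon-192.png">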

Wrap-up

For this project, I believe it was the right approach. The customer is happy with a lightweight and clean solution. But it was a special case in a well defined environment. For most consumer apps I still recommend native development.



Virtual Reality Experience with Google Daydream VR

Category : Android , java

Virtual Reality is a hot topic these days. A few weeks ago I had the opportunity to test an Oculus Rift with Touch Controllers. PlayStation VR and HTC Vive have also been released lately. Android Developers like me have their Cardboards, which are a very low-cost option.

Daydream VR

With the release of their Pixel devices, Google announced the Daydream VR. Similar to the Cardboard, you place your mobile phone in the VR headset and don’t need additional high-end hardware. For 70€ it is still a low-cost solution, if you don’t factor in the expensive phones.

Unfortunately, my first attempt at trying Daydream VR was not successful. I got the small Pixel phone, which worked flawlessly except when used in the Daydream VR headset. It had regular reboots, a problem many others around the web have reported as well. And even worse, it had extreme visual drift, as you can see in the video below.

It is hard to tell how bad that visual drift is. Your vision turning around while your body tells you there is no change in orientation makes you feel sick within a minute.

So after playing around and doing a factory reset, I decided to return the device and get the Pixel XL instead. It turned out this was a good choice. With the Pixel XL everything works flawlessly. Head tracking has no noticeable delay and the touch controller works great.

Experience

Compared to a Cardboard, this setup is a great improvement. A Cardboard only has a single button for user interaction. The touch controller gives navigation a whole new dimension. In games it is used as a magic wand, as a steering wheel, or for tilting a playing field to move a ball around. Every game seems to have its own way of navigating. I believe we will see a lot more navigation styles before a few crystallize as standards.

While Daydream with the controller is much better than before, you also see what is still missing. Turning your head around works great, but moving is not possible at all. In a VR world like Fantastic Beasts I want to move around and look at the beasts from all sides. In most of the applications this is not possible.

Graphics are pretty good with the right game or application. The detail level is impressively close to an Oculus Rift. However, in both VR systems you can make out individual pixels. Even a resolution of 2560×1440 pixels is not much in VR mode, because it has to be split between two eyes and fill the whole viewport. But every current VR system has this problem.



Personal Jenkins Server with Docker

Nowadays, every software development team should have a continuous integration server like Jenkins. This is as true for Android developers as for those on any other platform. It makes sure the current source code compiles and all the tests succeed, so nobody is blocked by a broken build. A CI also forces you to have a one-step build and to perform it regularly, usually on every commit.

Most often a continuous integration platform is used by development teams. However, it also gives many benefits to a single developer. Multiple times I had the problem that old projects would not compile or work after switching to a new computer. Also, I often forgot to run all tests if they seemed unrelated to my code changes. And last but not least, there is the pain of deploying when the last deployment was long ago.

Where to host?

A Jenkins build server would solve all these problems. But I didn’t want to spend a lot of money on hosting, because it is for private, closed-source projects with no profit. And running Jenkins on my development machine does not help, because it is still the same environment as my IDE uses. My first idea was to use a Raspberry Pi server. While it is not a fast computer, it runs on very little power and would have more than enough time for builds.

After playing around with it, I discarded that option again. Jenkins on a Raspberry Pi works, but it is not an x86 device. So if you are not using the Android SDK, a Raspberry Pi might be an option. For me it is not, because the Android SDK is not available on ARM systems.

Another low-cost option is VirtualBox. You set up a Linux server inside a virtual machine and host Jenkins in there. Although the virtual machine is hosted locally, it can easily be transferred anywhere if necessary. I had this option in mind for a while, but I didn’t like the overhead VirtualBox brings into it.

So when Docker announced its macOS release, I was eager to try it.

Set up Jenkins

Installing Docker is an easy task. There is a good Getting Started guide on their website. You will learn how to use the docker command and how to run containers. It turns out there is already a ready-made Docker image with Jenkins installed. Run it with
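    # the official Jenkins image (nowadays published as jenkins/jenkins:lts)
    docker run -p 8080:8080 -p 50000:50000 jenkins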

This starts up a new container with Jenkins running on port 8080. You will see the following website:

Docker Jenkins first login

For getting the initialAdminPassword you have to access the running container.
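    docker ps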

This command will show you all running containers. Look up the name of the container; in my case it is stupefied_bell. Edit the following command to use your container name:
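    # secrets path used by the Jenkins unlock screen
    docker exec stupefied_bell cat /var/jenkins_home/secrets/initialAdminPassword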

This prints the initialAdminPassword for logging in. If you enter it on the website, you can continue the setup process and have a running Jenkins in Docker.

Useful commands

You can stop your running machine with
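    docker stop stupefied_bell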

When you want to start it again, don’t use the docker run command from above. It would create a new instance. Instead, simply use
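    docker start stupefied_bell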

Get a shell on this container as the jenkins user:
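    docker exec -it stupefied_bell bash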

Get a shell on this container as root:
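    docker exec -u root -it stupefied_bell bash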

Install the Android SDK from the shell:
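One way to do this (a sketch only; the version numbers, URL and paths are examples to adapt, run from the root shell):

    apt-get update && apt-get install -y wget unzip
    wget https://dl.google.com/android/repository/tools_r25.2.3-linux.zip
    unzip tools_r25.2.3-linux.zip -d /opt/android-sdk
    export ANDROID_HOME=/opt/android-sdk
    yes | $ANDROID_HOME/tools/bin/sdkmanager "platform-tools" "platforms;android-25" "build-tools;25.0.2"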

Docker Speed

Performance-wise this setup is great. The docker container starts and stops within seconds. The only delay is caused by Jenkins startup-time, showing the please-wait message. My Jenkins is not yet overloaded with plugins and jobs so it takes less than 15 seconds for everything to be available.

I tested the build-performance with my Android project Laska:
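The comparison used a full Gradle build along these lines (the exact tasks are not preserved here):

    ./gradlew clean build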

The time for executing this command on my native machine is around 2 minutes 34 seconds. The Docker container takes only 1 minute 54 seconds. In multiple runs the outcome was always in favor of Docker. I cannot explain why this happens, as native should be the fastest. It might be a configuration setting on my machine.

Wrap up

Using Docker to host Jenkins is a great solution for solo developers. It is easy to set up, especially with a pre-packaged Jenkins image, and can be transferred to a dedicated server if necessary. Build speed is on par with a native environment.



Implement a turn based game AI with optimizations

Category : Android

My last post about implementing a turn-based game AI became famous on Reddit/Programming for one day. It looks like many developers are interested in this topic. Some readers who know the topic better than I do added interesting insights that I would like to share with you here. If you haven’t read the first post, you might want to have a look there first.

Minimax algorithm

The algorithm that I presented is called Minimax and is well known in academia for creating turn-based game AIs. It has this name because one player tries to minimize the getValue() result while the other player tries to maximize it.

For improving the AI strength you either have to improve your getValue() function, or increase the size of your decision tree. Usually, getValue() can only be improved to a certain point which is far from perfect. Therefore, at some point you want to focus on increasing the tree size. This means looking more half-moves into the future.

We can increase the tree size simply by using a higher recursionDepth value. Which value is good enough for a strong computer AI depends on your game. Laska uses a recursionDepth of 4 for the strong computer player, 2 for medium and only 1 on easy. This is good enough, because I want my users to win most of the time. I want them to enjoy beating a hard computer opponent if they are good players.

If your AI should be more competitive, you need a higher recursionDepth. The problem is that you will soon have a high runtime complexity and calculation time. If we have a branching factor of b and a recursionDepth of d, we have a complexity of O(b^d). So let’s say Laska has 4 possible moves on average and I am using a depth of 4. Then my AI is crunching through 4^4 = 256 situations, calculating and comparing their values. This is not very much, and it is also the reason I never had to optimize.

On the other hand, if you are implementing a chess AI, the branching factor is about 35. There is much more competition in this game and you might want to have a depth of 8. Then you are looking at 35^8 = 2,251,875,390,625 situations, which obviously is too much.

Alpha-Beta Pruning

In this case you should focus on further optimizations. The most important one is called alpha-beta pruning. It will cut off subtrees whenever they cannot influence the root decision (making a single half-move) anymore.

When can this happen? Every move could turn around the game instantly. In chess, you could be way behind in everything regarding your getValue() and still make a checkmate within a few moves. So how can you ever discard a whole subtree?

It is true that at some point you have to consider every possible move. However, once we know there is a winning path, we don’t need to figure out whether there are more in this subtree; we just play its first move. This goes both ways. If there is a path that definitely gives us a loss, our opponent will choose exactly that path, and we don’t need to evaluate any of his other moves.

This is not restricted to winning moves. If we have already found out that move 1 gives us a minimum value of 5, we can use this knowledge and stop evaluating move 2 once we know it is worse. We know move 2 is worse when our opponent can force it to a value below 5 with optimal moves on his side. This is best described in the following gif from Wikipedia:

Alpha-Beta Pruning in action

On the left side we have the situation described above. The rows in red mark red’s moves and the ones in blue stand for blue’s moves. Blue is minimizing the value, while red is maximizing it. So on the lower left side, when blue has to choose between 5 and 6, we know that he will choose 5. In the next subtree, blue can choose between 7, 4 and something else. We can already stop at the 4: whatever the remaining values are, blue can force this subtree to at most 4, so red will prefer the first subtree anyway. We don’t need the exact value of the pruned subtree, as long as we still get the optimal move at the root.

Alpha-beta pruning makes a huge difference when the decision tree is large. According to this link, it reduces the number of leaves to roughly the square root (given good move ordering). So if we use the previous example with 2,251,875,390,625 leaves, we only have to check about 1,500,625 situations. That is still a high number, but not impossible anymore.

Example Code in Laska

So how does the alpha-beta pruning look in a game like Laska?

The bestMove() method still looks much like the old version. It now calls alphaBeta() instead of getValueOfMove(), but both will return the value of a given move. The alphaBeta() value is negated and the method takes parameters for alpha and beta, in this case +/-SOME_HIGH_NUMBER.

Also, alphaBeta() didn’t change too much. At first, there is the recursion termination. When depthLeft equals 0, we just calculate the current value and return it.

If we are not in a leaf node, we calculate all possible moves. If there is none, the current player has lost the match and we can return -1000 here. Otherwise we continue walking through the tree, only this time we pass the alpha and beta parameters along.

Alpha tracks the best value we have already secured for ourselves, and beta the value above which the other player will not let the game go. If the current move has a score of at least beta, we can prune the graph here and stop calculating further, because the other player will not let us reach this position. Otherwise, if the score is higher than alpha, we have found a better move and need to save it.
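Put together, a Java sketch of this could look as follows. It is not the real Laska source; Field and Move only stand in for the game’s own classes, with the methods described in the original post:

    import java.util.List;

    // Assumed game types -- the real game supplies these.
    interface Move {}
    interface Field {
        List<Move> getPossibleMovesOfActivePlayer();
        int getValue();          // situation value from the active player's point of view
        Field copy();            // clone of the field for simulating a move
        void apply(Move move);   // applies the move and switches the active player
    }

    class AlphaBetaAI {
        private static final int SOME_HIGH_NUMBER = 100000;

        Move bestMove(Field field, int depthLeft) {
            Move best = null;
            int bestScore = -SOME_HIGH_NUMBER;
            for (Move move : field.getPossibleMovesOfActivePlayer()) {
                Field next = field.copy();
                next.apply(move);
                // alphaBeta() returns the value from the opponent's point of view, so negate it
                int score = -alphaBeta(next, depthLeft - 1, -SOME_HIGH_NUMBER, SOME_HIGH_NUMBER);
                if (score > bestScore) {
                    bestScore = score;
                    best = move;
                }
            }
            return best;
        }

        int alphaBeta(Field field, int depthLeft, int alpha, int beta) {
            if (depthLeft == 0) {
                return field.getValue();   // recursion termination: evaluate the leaf
            }
            List<Move> moves = field.getPossibleMovesOfActivePlayer();
            if (moves.isEmpty()) {
                return -1000;              // no move left: the active player has lost
            }
            for (Move move : moves) {
                Field next = field.copy();
                next.apply(move);
                int score = -alphaBeta(next, depthLeft - 1, -beta, -alpha);
                if (score >= beta) {
                    return beta;           // prune: the opponent will never allow this line
                }
                if (score > alpha) {
                    alpha = score;         // found a better move for the active player
                }
            }
            return alpha;
        }
    }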

Laska now has an “unbeatable level”, using a depth of 8

Runtime comparison

To see how much the alpha-beta pruning impacts performance, I ran a test case with the old algorithm playing against the new version with increasing depths of the decision tree. In the following diagram you see the time used for playing a complete game (up to 100 moves or until we have a winner).

runtimes of minimax vs alpha-beta pruning

As you can see, there is no visible difference for depths up to 5. Alpha-Beta is considerably faster at depth 6 and it totally owns Minimax on depths greater than this.

Further improvements

While this improvement is dramatic for higher depth levels, we can still do much better. Some further improvements are independent of your game, some depend on special knowledge about it, and others trade a little accuracy for increasing the depth of your tree.

The next optimization you might want to look into is move ordering to improve alpha-beta pruning. How you order your possible moves impacts how well the alpha-beta algorithm prunes the graph. If you start with the best moves first, it can cut off more of the tree later on.

There is a great overview of further improvements on StackOverflow. You can dedicate a lot of time to this. However, don’t forget to check whether your algorithm is already good enough. If the main task is not beating Garry Kasparov, you probably should improve user experience, design, marketing, etc. first.



Implement a turn based game AI (on Android)

Category : Android , java , machine learning

Game AI

Developing a game AI can be as much fun as playing, especially when creating your own computer opponent. I am going to present a simple pattern that works for nearly all turn-based game AIs, especially where there is a defined set of possible moves. This pattern has been powering my own Laska for over 6 years already, and I had used it before for personal Connect Four and Tic-Tac-Toe games.

For this you don’t need a perfect solution, but something that can win against most human players. It should prevent making obvious mistakes and the strength must be easily adjustable. I will show an algorithm that works for two players, but can easily be extended to more.

Basics

In a turn-based match, the players make their moves in turn. When saying move, what I actually mean is a ply, or half-move: the move of only one player. Our game AI will look a defined number of half-moves into the future and find the best possible move.

Your game should have a state which can be classified as good or bad. So in chess, we have a winning situation when the King will be taken out in the next move. This is great for one player, and really bad for the other one. There is also a lot in between. Let’s say one player has more Pawns than the other. This would be a good indicator that he is in a stronger position. In Laska, a winning situation is when the other player has no more possible moves available.

The blue player has won. Note that this simple situation is faked and can never happen in the real game.

Algorithm

There is a simple pattern that works for all these turn-based games. It is based on a decision tree of all possible moves and a classification value of the game situation.

From the starting position in Laska, there are four movers with six possible moves. Every move will lead to a forced jump. After that, there are either two possible jumps or three possible moves, depending on which Pawn was moved at the beginning.

starting position with four possible moves

If you can attach a value of the game situation to each node in the graph, it will be easy to select the best move. You need to figure out the subtree with the best worst-case situation.

 

For this, all your game has to provide is two methods:

  • getPossibleMovesOfActivePlayer(field), which will return all the possible moves for the active player on a given field.
  • getValue(field), which will return a useful value of the situation. It should be high positive when player 1 has won and high negative when player 2 has won.

With these methods available, you can find out what is the best move:
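The following is a sketch reconstructed from the description below; it is not the real Laska source. Field, Move and Player only stand in for your own game classes:

    import java.util.List;
    import java.util.Random;

    // Assumed game types -- your game supplies the real implementations.
    interface Player {}
    interface Move {}
    interface Field {
        List<Move> getPossibleMovesOfActivePlayer();
        Player getActivePlayer();
        int getValue(Player player);   // e.g. a pawn-based value for one player only
        Field copy();                  // clone of the field for simulating moves
        void apply(Move move);         // applies the move and switches the active player
    }

    class GameAI {
        private final Field field;
        private final Random random = new Random();

        GameAI(Field field) {
            this.field = field;
        }

        Move bestMove(int recursionDepth) {
            List<Move> moves = field.getPossibleMovesOfActivePlayer();
            if (moves.isEmpty()) {
                return null;              // the active player has no move left and has lost
            }
            if (moves.size() == 1) {
                return moves.get(0);      // only one option: skip the expensive evaluation
            }
            Move best = null;
            int bestValue = Integer.MIN_VALUE;
            for (Move move : moves) {
                // the small random bonus makes equally good moves interchangeable,
                // so the computer does not always play the exact same game
                int value = getValueOfMove(move, recursionDepth) + random.nextInt(2);
                if (value > bestValue) {
                    bestValue = value;
                    best = move;
                }
            }
            return best;
        }

        int getValueOfMove(Move move, int recursionDepth) {
            Field nextField = field.copy();
            nextField.apply(move);        // simulate the move; the opponent is now active
            if (recursionDepth == 1) {
                return nextField.getValue(field.getActivePlayer())
                        - nextField.getValue(nextField.getActivePlayer());
            }
            GameAI nextAI = new GameAI(nextField);
            Move bestMove = nextAI.bestMove(recursionDepth - 1);
            if (bestMove == null) {
                return 1000;              // the opponent has no move left: we have won
            }
            // the opponent's best answer is our worst case, so its value is negated
            return -nextAI.getValueOfMove(bestMove, recursionDepth - 1);
        }
    }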

The method starts with an initial value of Integer.MIN_VALUE. Then it considers all possible moves. If there is only one move available, this is automatically the best option. This is an optimization that gives tremendous speedups when using a high recursionDepth. The recursionDepth determines the size of our graph: the more we look into the future, the stronger our AI gets.

In all other cases, where we have more than one possible move, we calculate the value of all of them with the specified recursionDepth. The part with Random().nextInt(2) is included to make the AI less predictable. If you leave out this randomness, the computer will always play the exact same game, provided you do so as well.

So what does getValueOfMove() do?

First it creates a clone of the field object for further calculation. On the cloned field (nextField), the given move is applied and the active player is changed to the next one. With this new field, we now calculate the value of the resulting situation. The simple case is when recursionDepth is exactly 1. Then we add the field value for the active player and subtract the field value for the other player. In my case, the getValue() method only factors in the pawns of one player, so it has to be called twice.

The more complex case is when recursionDepth is greater than 1. Then we recursively call the method bestMove() from above with a recursionDepth reduced by one. If there is no possible move and the bestMove is null, then the active player has won the match. So we return 1000, a very high number that can only be reached in a winning situation.

If there is a bestMove, this is what the opponent will choose to do. So for our move, this bestMove is the worst case that can happen to us. Because the value of the bestMove is calculated from the other player’s point of view, we have to negate it with value = -nextAI.getValueOfMove(bestMove, recursionDepth - 1);

Calculating the value

As mentioned above, you need to provide a getValue() method specific to your game. It receives the game state as input and returns an integer describing how “good” it is for the current player. In the case of Laska it returns values in the range of -1000 to 1000. Most likely there is no perfect or correct return value. You have to create your own method and fine-tune it over time. The better your getValue() method gets, the stronger your AI will be, and the less you need a high recursionDepth.

The weakest getValue() method will only return a negative number for a lost match and a positive number for a won match, otherwise zero. In this case you would have to use a recursionDepth that calculates all possible moves until the end of the game. In some cases this might be an option. The perfect getValue() method will already know all possible outcomes and give you perfect values. Your algorithm therefore only has to use a recursionDepth of 1.

Since most of the time it is neither possible to calculate all possible moves until the end nor to create a perfect getValue() method, we have something in between. In Laska the values are calculated like this:

  • For each of my pawns, add 10
  • Add 5 for each of my pawn slices below the top, until there is an opponent’s pawn slice
  • If the pawn is an officer, add another 20
  • Add 2 if a pawn is on an outer field instead of in the middle
Red has a value of 37, because there is a red pawn (+10), it is an officer (+20), there is another red pawn slice below (+5) and it stands on an outer field (+2)
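In code, such an evaluation could look roughly like this, continuing the sketch from above. The PawnStack type and the getStacksControlledBy() helper are assumptions for illustration, not the real Laska data model:

    // One way a Field implementation could compute getValue(player).
    interface PawnStack {
        boolean topIsOfficer();
        boolean isOnOuterField();
        // my slices directly below the top, counted until an opponent's slice appears
        int ownSlicesBelowTop(Player player);
    }

    public int getValue(Player player) {
        int value = 0;
        for (PawnStack stack : getStacksControlledBy(player)) {
            value += 10;                                   // one of my pawns is on top
            if (stack.topIsOfficer()) {
                value += 20;                               // officers are worth more
            }
            value += 5 * stack.ownSlicesBelowTop(player);  // my slices captured under the top pawn
            if (stack.isOnOuterField()) {
                value += 2;                                // outer fields are safer than the middle
            }
        }
        return value;
    }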

For other games, think of indicators that obviously help towards winning. If pawns are taken out of the game, compare the number of pawns. If you need four in a row to win, give some value to three in a row, at least if there is still space for a fourth.

Unit Testing

AI is a poster child for using unit tests. In fact, you will have a hard time if you leave them out. Especially annoying are minor mistakes that don’t break your algorithm but weaken the AI significantly. I started without proper testing and ran into that problem far too often. Some years ago I realized the value of testing and can iterate much faster now.

You can easily test all the methods with synthetic input. Does field1 give a higher value than field2? Make sure the best move in field1 for player 1 is X. Whenever you think “my AI should make this move now” but it does not, create a test case and fix it. Either you will find the mistake, or you will realize that the AI was in fact correct and you were wrong. This way you will soon have a stable and strong AI.

Another cool and fun usage is to let one version of the game AI play against an improved one. While optimizing the getValue() method, I could only believe that a change made the computer stronger. That was until I created a test that played 100 matches of the new version against the old one. Now there is an evaluation of strength, and I can be sure whether my change is an improvement or just a change.

Optimizations

The algorithm above is not meant for winning a competition or to be perfectly efficient. If you want that, you can start with Peter Norvig’s Artificial Intelligence: A Modern Approach and read through some current papers. However, the algorithm is a pattern that works for nearly all turn-based games and gives decent results, so you can move on to creating all the other important aspects.

Some things that can be improved:

  • In every getValueOfMove() the whole field object is cloned. While this wastes some computation time, I found that the algorithm is still fast enough even on mobile phones. The field objects are not too big, and cloning them simplifies the following steps. It also makes it easier to parallelize the computation if that ever becomes necessary.
  • After bestMove has been calculated, the value is calculated by stepping into the same recursion again: value = -nextAI.getValueOfMove(bestMove, recursionDepth - 1); — the value could instead be returned together with the best move to avoid computing it twice.
  • You can tune the randomness in bestMove() by collecting all moves of the same value and making an equally weighted choice between them.

If you can think of more, please tell me in the comments.

Wrap Up

As you can see, implementing a computer opponent in a turn-based game is not rocket science. You only need to supply two methods and find the optimal move from the resulting tree as described here.

If you want to see the described algorithm in action, download Laska from the Play Store.



Interviewing software engineers

Category : Android

Interviewing basics

When interviewing software engineers, you typically want to answer one simple question: Will they be a good fit for my team? Therefore, your interviewing process should focus on answering this question. Obviously both sides try to answer the same question and so you also want to sell yourself as a great place to work.

Over the last few years I have interviewed a lot of software engineers, mostly Android developers, for different positions. I always start by introducing myself and the company, explaining what we do and why it would be great to work with us. HR managers often like to ask candidates why they applied to our company. I try to avoid this type of question. If you are neither Google, nor Facebook, nor Apple, chances are candidates didn’t choose you because of your great products, your famous community relations or because they love to work on the cutting edge. What you should do instead is talk about their CV and let them explain some topics in detail.

Let them code

The most important part of an interview is a live coding session. Many engineers don’t like being tasked with a coding challenge; I even had freelancers refuse to work on one. While I understand the negative bias, in my opinion it is still the best way to evaluate candidates. Interviewing is hard, and hiring the wrong people is fatal to your business.

The result of any interview heavily depends on personal taste and sympathy. This is especially true when you are looking for a skill set you are not 100% familiar with. Engineers with deep knowledge tend to see multiple problems in every situation, while others are not even aware of the same limits. This is the Dunning-Kruger effect.

To prevent this from happening, you have to see applicants do what they are supposed to do: let them write code. Now at least you have something to compare, not only how many buzzwords they have memorized. Keep in mind, you will never be sure to have a top performer just by interviewing. But you can consistently filter out everyone who just can’t code. Depending on how hard your tasks are and where you set your bar, you will also filter out applicants who are nervous, get confused by your task or just have a bad day. This is not good. But it is still better than hiring the wrong people.

To reduce applicants’ nervousness at the beginning, I try to make them feel at home as much as possible. They can use their own laptop, their own IDE and language. And of course I offer my cooperation in solving the task, by answering questions and telling them when they are heading in the wrong direction.

You will be surprised by how much more you can see than just the resulting source code. It starts with the approach they take. Do they have an idea of the algorithm before starting? Do they ask clarifying questions? Then there is the coding itself. Do they use the IDE like Notepad, or do they navigate the code quickly using all the famous shortcuts? How do they verify and test their program? All this is more important than the actual solution.

Preparation

The most important part of the preparation is to understand how hard your task is. Only then can you evaluate others based on it. That means you (or one of your engineers) have to solve it in exactly the same setup and time.

A great source of tasks is Project Euler. It starts with problems on the FizzBuzz level and gets slightly more complicated over time. Another great collection of interviewing tasks with solutions is Rosetta Code. It is tempting to use challenges from there, but don’t forget to solve them yourself before reading the solution.

Final words

Finding the right engineers for your team is hard. It is tempting to cut corners when you have to staff your team quickly. However, hiring the wrong person is much worse. Not only will you pay for an unproductive developer, but you can severely damage the productivity of others (think of the net negative producing programmer, NNPP). You could part ways with a non-performer after a few months, but at that point it becomes much harder. He or she might have become good friends with the rest of the team, including yourself. Firing someone has a big impact on team morale, and you should really limit it to exceptional cases.

On the other hand, if you feel confident about a candidate, make it your first priority to close the deal. Send updates about everything and tell them when to expect the next step. It is not hard to stand out from other companies (at least where I come from) nowadays. Prompt, positive feedback goes a long way toward making someone want to work for you.



Google Play Optimization for Android developers

Category : Android

Currently, I am preparing a presentation about optimizing apps for the Google Play Store. It is an interesting topic and still in its early stages. Unlike SEO for websites, it doesn’t seem to be affected by a PageRank-like algorithm. Instead, the main focus is on keywords and, of course, the ratings of an application.

There is a series of articles about all the optimization topics on droid-blog.net. I highly recommend reading them.

The strong keyword dependency sometimes leads to unexpected search results. My game Laska, for example, is the online strategy game champion (of the world):



Multiple targets from one Android source (the better way)

Category : Android , java

Some of you might have read my article Android: Deploying multiple targets from one project. It describes how to create customized versions of the same software and therefore benefit from multiple apps with the same feature set. That deployment with an Ant script has proven to work well. For example, our GMX Mail app is available in four different customizations for different brands and uses a similar approach with Maven.

However, there is now a better way to handle multiple targets. It is less complex and gives you even more options to customize the different targets. By using an Android Library Project, you still get the benefit of sharing resources and code, without the hacky Ant script. Remember, the Ant script would go through every Java source file and change an import statement, just to resolve the different package name of the R file. Switching between targets required an Ant build with a refresh of the workspace. Not anymore. Now switching between projects is as simple as clicking the run button in Eclipse. Especially for bigger projects this is a huge benefit, because refreshing the workspace can take quite some time.

So what is the new setup? You need to create one base project with everything common inside and declare it an Android Library Project. This option is available under project properties in the Android tab. Then you create the first of your targets as a separate Android project in Eclipse. On the same properties tab of the new project, you add the base project as a library. Repeat this for another project, which will be your second target. Now you will have something similar to this:

For the showcase, I deleted all source files of the custom projects. Since we want to reuse the majority of our source code from the base project, we don’t need any custom sources right now. There is one little fix we need to apply to the AndroidManifest.xml file. The Android wizard in Eclipse uses relative references to our Activities. This does not work if we want to use our Activity from the base project, because it lives in a different package than our custom project. Therefore, we have to specify the fully qualified name of the Activity. In my sample the important part looks like this:
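(The package and class names below are placeholders; the point is the fully qualified android:name.)

    <activity android:name="com.example.baselibrary.MainActivity"
              android:label="@string/app_name">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>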

That’s it. You can now override all your base resources and source files in the custom projects. Every new feature developed in the base project is immediately available everywhere. Only if the AndroidManifest needs updating do you have to edit it in all custom projects. But this also means you have fine-grained control over the manifest file.

I updated my old example project on Google Code. Feel free to use it as a start for your own project with multiple targets. Feedback and contributions are always welcome.



Key learnings from analytics

Category : Android

Enough time has passed since I put Google Analytics into my Android game Laska. It has been collecting statistics for nearly a month now. Therefore, I want to share some data and show the key learnings I got out of it.

The majority of my users are from China and Poland

This was quite unexpected for me. The game is pretty German. It is even named after Emanuel Lasker, the famous chess player. However, Germany is responsible for only a very small part of the visits. This is interesting for future localizations, because a Mandarin translation might make a lot more sense than a German one.

There is always more to track

I started off with only tracking the in-app pages and a variable for won/lost games. But with this new information, the next questions are already pending. How many games were played on the easy level? How many on hard? What is the percentage of games won on hard? There is always more to measure, and I will implement it in future versions of the game.

Devices used

The Samsung Galaxy S and HTC Wildfire are responsible for nearly half the usage. The most common display size is 480 x 800 with 42% of the visits, followed by 240 x 320 with nearly 30%. I did not expect small-display devices to have such a big share of the visits. One reason might be that users in China and Poland are more likely to use lower-end Android devices than users in the US or Germany.

 

