During a coffee break with a friend of mine (cheers to Andi), we started talking about how one could simulate a human queue at a ski-lift. Those of you who are familiar with skiing may already know what point I'm trying to make.

When a specific ski-lift is crowded, there are different strategies that may get you to the lift faster: for example, when the queue takes a curve, progress inside the queue may be much faster on the outside of the curve.

In this informal study I wanted to examine these different strategies with computational simulations.

The first attempt was to model the queue with a three-dimensional potential-field. The ground-potential at a specific point (x,y) should encode both the distance (between the starting region and the target-area) and the direction in which to go. So I decided to make the following assumptions for the ground-potential:

1. The starting-point has the highest potential.

2. The target point must have a potential of zero.

With these points in mind, we can move on. We want to simulate the behaviour of human beings, so we need elements that represent them. These elements are created by a time-dependent function c(t) at points in the starting region; c(t) represents the rush of incoming elements.

Note that we project a 2d field onto our 3d potential-field. The negative gradient of the potential-field then gives the direction in which an element should move next to reach the target as fast as possible.
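This movement rule can be sketched in a few lines of Python, using the ground-potential phi(x,y) = 0.05x**2 + y from the animation below; the finite-difference step and the normalization are my own illustrative choices:

```python
import numpy as np

def phi(x, y):
    """Ground-potential: high at the starting region, low at the target."""
    return 0.05 * x**2 + y

def step_direction(x, y, h=1e-5):
    """Unit vector of the negative gradient of phi at (x, y)."""
    dx = (phi(x + h, y) - phi(x - h, y)) / (2 * h)  # central differences
    dy = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    grad = np.array([dx, dy])
    return -grad / np.linalg.norm(grad)  # points toward lower potential

# An element at (10, 5) moves diagonally toward smaller x and y:
print(step_direction(10.0, 5.0))
```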

As soon as elements reach the target-area, they are removed, again by a time-dependent function r(t), which represents the transport capacity of the ski-lift.

But how do we model those elements? And how do we make sure they don't overlap? We decided to place a Gaussian bell at the position (x,y) of each element. This bell is added to the ground-potential, altering the gradient in the area around that point. So, unless generated at the same starting point, two or more elements can never overlap, since they only ever move in the direction of the negative gradient.

With that being said, we receive the following animation of Gaussian bells descending on the ground-potential. Here the bottom left corner (deep blue) represents the target-area and the top right corner (red) represents the starting region. The ground-potential was calculated using the function phi(x,y)=0.05x**2+y.
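A minimal sketch of how the element bells are added to this ground-potential; the bell height and width are arbitrary illustration values, not taken from the original simulation:

```python
import numpy as np

def ground(x, y):
    """Ground-potential from the animation."""
    return 0.05 * x**2 + y

def gaussian_bell(x, y, cx, cy, height=5.0, width=1.0):
    """Bell centred on an element at (cx, cy); repels other elements."""
    return height * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * width**2))

def total_potential(x, y, elements):
    """Ground-potential plus one bell per element."""
    p = ground(x, y)
    for (cx, cy) in elements:
        p += gaussian_bell(x, y, cx, cy)
    return p

# The bell of an element at (3, 3) raises the potential there,
# so a second element is steered around that spot:
print(total_potential(3.0, 3.0, [(3.0, 3.0)]))  # on top of the bell
print(total_potential(6.0, 6.0, [(3.0, 3.0)]))  # far away: ~ground value
```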

The disadvantages of this approach are obvious. The algorithm is already well known as the gradient descent method. The model can only look one step ahead, based on the gradient at the current point. Modeled like this, it is merely a local optimization problem. We could not apply deep learning to it (or at least it wouldn't make any sense).

But it gives us a better understanding of the problem.

The main goal of the second concept was to apply Q-learning to the problem. For those of you who do not know anything about Q-learning, I recommend the following video: https://youtu.be/aCEvtRtNO-M. Since the potential-field is no longer needed, we can reduce the problem to two dimensions. In the following, we consider the problem as seen from the top. Doing so, we choose the following method:

1. We create elements with defined properties: maximum speed, maximum acceleration, score.

2. Each timestep the score is reduced, so that the model has an incentive "to do something" in order not to lose more points.

3. If the element is accelerating towards the target-area it will receive points.

4. If the element reaches the target-area, it will receive even more points. If the element is removed by r(t) the task counts as accomplished.

5. If the element hits a specified border, or another element, points will be removed and the task counts as failed.
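The five rules above can be sketched as a per-timestep scoring function; the exact point values here are placeholders, since the text fixes concrete values only in the later attempts:

```python
# Hypothetical constants; the text does not fix the magnitudes at this stage.
STEP_PENALTY = -1      # rule 2: lose points every timestep
PROGRESS_REWARD = +1   # rule 3: accelerating toward the target-area
TARGET_REWARD = +50    # rule 4: reaching the target-area
CRASH_PENALTY = -10    # rule 5: hitting a border or another element

def score_update(accelerating_toward_target, reached_target, crashed):
    """Score change for one timestep under rules (1)-(5)."""
    score = STEP_PENALTY
    if accelerating_toward_target:
        score += PROGRESS_REWARD
    if crashed:
        score += CRASH_PENALTY   # episode counts as failed
    elif reached_target:
        score += TARGET_REWARD   # episode counts as accomplished
    return score
```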

As you probably already noticed, it may make sense to bring back the potential field to calculate the points received in (3).

Using the pyglet library, the following model was created in Python:

Here the left (double-lined) border represents the target-area and the right border represents the starting region. The lines located in the middle of the plane could act as some kind of obstacle, for example a turnstile. Obviously the red dot represents an element.

In this first attempt at applying Q-learning, the agent receives as inputs the current position of the element and the intersection points of lines going out from the element in multiple directions (representing its view, as shown in the following figure). As output, the agent gives the direction in which it wants to accelerate. For now we neglect the impact of non-static acceleration.
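Such a view-line observation can be sketched as rays cast from the element toward the field borders; the field size and the ray count are assumptions (7 rays plus the 2 coordinates would match the input dimension of 9 mentioned in attempt no. 41):

```python
import math

WIDTH, HEIGHT = 700, 375  # assumed field size, roughly matching the spawn intervals

def border_distance(x, y, angle):
    """Distance along a view-line from (x, y) to the nearest field border."""
    dx, dy = math.cos(angle), math.sin(angle)
    ts = []
    if dx > 0: ts.append((WIDTH - x) / dx)
    if dx < 0: ts.append(-x / dx)
    if dy > 0: ts.append((HEIGHT - y) / dy)
    if dy < 0: ts.append(-y / dy)
    return min(ts)  # first border hit by the ray

def observation(x, y, n_rays=7):
    """View-line distances in n_rays directions, plus the element's position."""
    angles = [2 * math.pi * i / n_rays for i in range(n_rays)]
    return [border_distance(x, y, a) for a in angles] + [x, y]

obs = observation(550.0, 188.0)
print(len(obs))  # 7 distances + 2 coordinates = 9 inputs
```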

A simple deep Q-learning network was created to analyze the simulated environment. You can access the full source code on GitHub: github.com/flemk/sky
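The actual network lives in the repository; purely as an illustration, here is a minimal NumPy sketch of a Q-network forward pass. The layer sizes, the random weights, and the discretization into 8 acceleration directions are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, N_HIDDEN, N_ACTIONS = 9, 32, 8  # 8 discrete acceleration directions (assumed)

# Randomly initialised weights stand in for trained parameters.
W1 = rng.normal(0, 0.1, (N_INPUTS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(obs):
    """Forward pass: observation -> one Q-value per action."""
    h = np.maximum(0.0, obs @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

obs = rng.normal(size=N_INPUTS)
action = int(np.argmax(q_values(obs)))  # greedy action: accelerate in this direction
print(action)
```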

The results of the following attempts are discussed below.

Attempt No. 7:

Although the first attempt was a failure, it shows the basic working principles of Q-learning. It was considered to add "checkpoints" to the environment, where the agent would receive additional points, but this would lead to the agent reusing a checkpoint over and over again if the option of going backwards was enabled. So we'll save that "checkpoint" idea for later. Obstacles were added in this attempt and the initial maximum velocity was set to 25 (px per cycle).

The following graph shows the score received by the agent in each "played" episode (blue). The black line represents an envelope curve. It is clearly recognizable that the envelope curve is approaching zero. This is due to the fact that the Q-learning algorithm is designed to maximize the score.
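An envelope curve like the black line can be computed, for example, as a rolling maximum over recent episodes; the window size and the synthetic score series below are purely illustrative:

```python
import numpy as np

def upper_envelope(scores, window=50):
    """Rolling maximum of the score over the last `window` episodes."""
    scores = np.asarray(scores, dtype=float)
    return np.array([scores[max(0, i - window + 1):i + 1].max()
                     for i in range(len(scores))])

# Synthetic per-episode scores that trend from -200 toward zero:
episodes = np.arange(500)
scores = -200 * np.exp(-episodes / 150) + np.random.default_rng(1).normal(0, 5, 500)
envelope = upper_envelope(scores)  # approaches zero as training progresses
```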

In this attempt no. 7 the points were given by the following principles:

1. No operation ("waiting") was rewarded with -1 points.

2. Reaching the target-area was rewarded with +50 points.

3. Each cycle the reward was reduced by 1.

The analysis showed that the scoring distribution is not ideal. For example, the agent would "run" into an obstacle or border as fast as possible to maximize the reward, but would never reach the goal. So the points received in case of a crash should be more negative, while the points received when reaching the target should raise the agent's score to positive values.
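A quick back-of-the-envelope calculation makes the flaw concrete. Assuming, for illustration, that a crash carries no explicit penalty and that the target takes about 60 cycles to reach:

```python
def total_return(n_cycles, terminal_reward):
    """Total score under attempt no. 7: -1 per cycle plus a terminal reward."""
    return -1 * n_cycles + terminal_reward

crash_fast = total_return(n_cycles=2, terminal_reward=0)    # hit a border quickly
reach_goal = total_return(n_cycles=60, terminal_reward=50)  # slow walk to the target

print(crash_fast, reach_goal)  # -2 vs. -10: crashing early scores higher
```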

Attempt No. 41:

While there were several unmentioned attempts, the following attempt no. 41 showed better performance after improving the model. The following changes were made:

1. The agents started at random starting positions (in the intervals xi=(100,625), yi=(100,275)).

2. The observation (input to the agent) contained only the distances between the agent and borders or obstacles (not targets!), as well as its current coordinates, so the input dimension was 9.

The rewards were these:

1. If the agent hits a border, the reward is -10 (and the episode is aborted).

2. If the agent notices the target within its view-line distance, it is rewarded with +1.

3. If the agent hits the target, it receives a reward of +100.

4. Each cycle the reward is reduced by 1.

5. No operation ("waiting") is rewarded with -1 points.
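These rules can be sketched as a single reward function; whether the per-cycle penalty and the no-op penalty stack is my assumption, the original implementation may differ:

```python
def reward(hit_border, sees_target, hit_target, waited):
    """Reward rules of attempt no. 41, evaluated once per cycle."""
    r = -1                 # rule 4: each cycle costs one point
    done = False
    if hit_border:
        r += -10           # rule 1
        done = True        # episode aborted
    elif hit_target:
        r += 100           # rule 3
        done = True
    elif sees_target:
        r += 1             # rule 2: target within a view-line
    if waited:
        r += -1            # rule 5: explicit no-op penalty (assumed cumulative)
    return r, done
```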

With these policies the model was trained for 23'010 episodes. The following graphs display the received score at each episode:

Scores over episodes 22'010 to 22'510.
Scores over episodes 22'510 to 23'010.

Clearly there is a visible improvement in the behaviour of the agent. In the next attempt it was investigated how the agent behaves when the observation additionally contains the distance between target and agent.

Attempt No. 46:

Around attempt 46 the reward strategy turned out to be effective. In comparison to attempt 41, a potential-based reward was added. Training for 1'500 episodes resulted in the following graph:

A 500-episode evaluation resulted in 161 crashes, but in 339 cases the agent was able to reach the target. Clearly there are two "levels" of scores: the first at around +100 points, the second at around -10 points. This is due to the agent perceiving a border and running into it to maximize the reward in the shortest time (as explained in attempt no. 7). There may also be a mistake in spawning the agents or in the observation of the training environment. Either way, this was to be improved in future attempts.

Attempt No. 47:

The same method as in attempt 46 was used to train a new agent over 20'500 episodes. The first results, where the agent was trained with random starting points, were bad as expected: the two "levels" still existed. When using a fixed starting point, the agent did well; almost only one "level" remained. The new hypothesis for the emergence of the lower (-10 points) level is that the agent is sometimes spawned directly inside the border, so that it has no choice but to crash into it. This hypothesis is supported by the fact that the lower level sits at a score of around -10 to -12, which is exactly the calculated score of an agent existing for only one cycle before hitting the border.

Compared to the model using a fixed starting point, the negative scores were negligible: the agent crashed in only 12 episodes. Their existence may be attributed to the choice of epsilon. epsilon was chosen to be 0.01, which is not quite zero, so random actions may still occur occasionally. So a new attempt was made where epsilon equals zero: no random actions can be taken.
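The role of epsilon comes from epsilon-greedy action selection, which can be sketched as follows:

```python
import random

def choose_action(q_values, epsilon):
    """Epsilon-greedy: explore with probability epsilon, otherwise act greedily."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # random exploratory action
    return max(range(len(q_values)), key=lambda a: q_values[a])  # greedy action

q = [0.1, 0.9, 0.3]
# With epsilon = 0.01, roughly one action in a hundred is random, which is
# enough for occasional crashes. With epsilon = 0 the agent is deterministic.
print(choose_action(q, 0.0))
```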

Scores in attempt no. 47 with random starting points.
Scores in attempt no. 47 with fixed starting point (550, 188).

In this attempt - described above - the rewarding system was dependent on the potential. The potential function was chosen to be phi(x,y)=700-(x/100). The plot of the function is shown in the next figure:

The same reward-strategy as in attempt 41 was used.

Attempt No. 48:

The same trained model as in attempt no. 47 was used, but epsilon was set to zero from the beginning. The results were quite impressive: the agent crashed 5 times and reached the target 245 times out of 250 episodes. This is represented by the following graph:

The maximum score to be reached was +93. The same reward-strategy as in attempt 41 was used.

Attempt No. 49:

Using the new potential function phi(x,y)=-7*((x+200)**2+8*(y-200)**2)/800000+(791/80) a new model was trained. Its graph is shown in the next figure:

3D plot of the new potential-function.
Contour plot of the new potential-function.

Obstacles were not yet added - this is to test the effectiveness of the potential-function. The same reward-strategy as in attempt 41 was used.
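Evaluating this potential-function shows that, within the field, it rises toward the left border (the target-area) and toward y = 200, which is the direction a potential-based reward would favour:

```python
def phi(x, y):
    """Potential-function of attempt no. 49 (from the text)."""
    return -7 * ((x + 200)**2 + 8 * (y - 200)**2) / 800000 + 791 / 80

# Potential at the fixed starting point vs. at the target side (left border):
print(phi(550.0, 188.0))  # starting region: lower potential
print(phi(0.0, 188.0))    # target-area side: higher potential
```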

Comparing the output plots of the agent training, with spawning at random starting points versus a fixed starting point, we receive the following graphs:

Scores in attempt no. 49 with random starting points.
Scores in attempt no. 49 with fixed starting point (550, 188).

It is clearly noticeable that the new potential-function leads to better results compared to earlier attempts. However, it also seems that the agent does not reach a constant level of scores.

The results above are only the tip of the iceberg. Many improvements to the model and the environment are possible and conceivable. For example, the physics engine could be enhanced to take acceleration and/or other aspects into account. And the most interesting part is yet to come: multi-agent training - how does the model behave when multiple agents compete with each other to get to the target as fast as possible?

After the many hours I put into creating and testing this model, and a lot of failures and debatable results, I can say in conclusion: this project was really fun and gave me a first hands-on look into deep-learning techniques.

Stay tuned!