What do we actually do?

Company introduction

Recon-AI is a young software company of four people, and we implement artificial intelligence for machine vision. We are looking for projects and partners. As a software company, we are especially looking for partnerships with hardware manufacturers, so that we can get the correct sensors for our customers, as well as software partnerships to streamline our implementation process.

Why AI?

So what is the motivation to use artificial intelligence? Compared to traditional techniques, a properly trained AI solution can be computationally extremely lightweight while producing high-quality results. In other words, AI is very cost-effective, enabling real-time, on-site operation on embedded systems.

Chess example

Here is a fun little example: Stockfish 8 is a chess engine that has been superior to human players for many years now. It considers 70 million positions per second and was built on essentially the entire history of professional chess matches. Google's AlphaZero is a "true" AI that considers only 80,000 positions per second, trained by playing only against itself for a few hours, and beat Stockfish 8 in 28 games out of one hundred without ever losing. Now of course, this type of AI is quite different from what we are (at least currently) using, but it highlights the advantages of AI well.



What does an AI need for this lightweight operation? The computationally heavy part lies in the training, for which the AI needs to process a lot of data. However, this, along with testing and validation, can be done in advance, for example on a computer cluster. The trained network itself is light and can then be transferred easily to the on-site machine, for example over the mobile network. There is also the added benefit that updating the AI for additional functionality or fidelity, as per the client's needs, can be automated, which is convenient.


Training the AI

A bit about the training process. An AI is essentially a combination of a network architecture and network weights. The architecture must be designed for a given problem type, and the weights are then optimized for the problem, which is called training. So first the network architecture must be designed. After that, the training data must be gathered and labeled, which essentially means solving the problem for that data: for example, if the AI is supposed to find certain points, those points must be marked on the training data. This labeled data can then be augmented, which means using certain tricks to multiply the data available for training. Finally, the augmented and labeled data is run through the untrained network and the weights are optimized by a certain algorithm. The training process gives us the weights, which in conjunction with the architecture form the AI that can now be used.
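The steps above can be sketched in miniature. This is a toy, self-contained illustration in plain Python, not our actual platform: the data, labels, augmentation trick, and one-unit "network" are all made up for the example.

```python
import math
import random

random.seed(0)

# 1. Gather and "label" synthetic data: 2-D points, labeled by which side
#    of the line x + y = 0 they fall on (a stand-in for real labeled images).
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
labels = [1.0 if x + y > 0 else 0.0 for (x, y) in data]

# 2. Augment: mirroring a point flips its label, doubling the training set
#    for free (a stand-in for real image augmentation such as flips or crops).
data += [(-x, -y) for (x, y) in data]
labels += [1.0 - t for t in labels]

# 3. Train: the "architecture" is a single sigmoid unit; the weights are
#    optimized with plain gradient descent on the cross-entropy loss.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(500):
    g1 = g2 = gb = 0.0
    for (x, y), t in zip(data, labels):
        p = 1.0 / (1.0 + math.exp(-(w1 * x + w2 * y + b)))  # prediction
        g1 += (p - t) * x
        g2 += (p - t) * y
        gb += (p - t)
    n = len(labels)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

# 4. The trained "network" is just three numbers: tiny and easy to transfer
#    to an on-site device, exactly as described above.
correct = sum((w1 * x + w2 * y + b > 0) == (t > 0.5)
              for (x, y), t in zip(data, labels))
print(f"training accuracy: {correct / len(labels):.2f}")
```

The heavy loop in step 3 is the part that would run on a cluster; only the final weights need to reach the embedded device.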


Now, what we are actually offering is a platform that can be utilized to implement AI-based solutions to a variety of machine-vision problems. As I said before, the solutions are expected to be relatively cost-effective, computationally lightweight, and convenient to use, update, and expand upon.

What we need from the client is, obviously, a description of the problem and a large set of representative data. Based on the problem and any existing sensors, we can assess whether, and how easily, the problem can be solved. If there are no existing sources of data, we will assess the needs based on the problem description. As a software company, we rely on partnerships with hardware manufacturers to provide the required sensors. It is important to note that although we can certainly assess the suitability of a particular type of data, the AI needs to be trained before it can be used or tested.


Here is an illustration of how the relationship between us and the client could work. As I mentioned, we need the data and the problem description. The sensors, i.e., the sources of the data, can be pre-existing or new. We will of course help with the choice of sensors should new hardware be needed. Based on the data and the problem description, we will build the AI, train it, test it, and implement it for the on-site application. Once the implementation is operational, new data can be gathered for additional functionality or for further improvement of accuracy, should conditions change, for example.

AM-RAIL™️ example


Here is an example for the railroad industry. In order for trains to run safely and on time, the surrounding infrastructure has to work really well. However, the different components of the infrastructure, for example the electrification and the superstructure, wear down and may eventually break. If it gets to that point, the economic and even human damage can be quite serious. This is why preventive maintenance and asset management are so important: so that it never gets to that point. Thus, extensive active monitoring is required. Much of the current monitoring equipment relies on heavy measurement wagons that are expensive to buy, run, and maintain. This is where we come in.


Our AM-Rail platform is based on relatively cheap sensors that can be installed on, for example, cargo trains without the need for a separate measurement wagon. We then train an artificial intelligence to analyze the data feed from the sensors, i.e., cameras, lidars, etc., and extract the relevant information from it. The data analysis can be done conveniently in real time with a light and cheap embedded device on board the train. And like I said before, the software can be updated easily over the mobile network. Thus, money is saved.


Here is an example implementation. The goal here is to measure the height of the contact point with respect to the rail level and its horizontal displacement from the center of the rails. The contact point is located at the top of the red line in the picture. This image is a frame from a video recorded with a single consumer-grade camera. First, the AI finds the rails and three points of interest in the frame. The points of interest are the previously mentioned contact point (top of the vertical red line) and the two endpoints of the pole (endpoints of the blue line). Since this is done with a single camera, the pole is needed as a reference, because the picture contains no depth information. Once the points and rails are found, the two measures can be calculated using simple math. Here, we reach approximately 3% accuracy with a consumer-grade camera and an AI that runs on a cheap CPU and whose training cost next to nothing.
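The "simple math" step could look roughly like this: a pole of known physical length spans a known number of pixels, which gives a meters-per-pixel scale at that depth, and the two measures follow from pixel distances. All the coordinates and the pole length below are made-up illustrative values, not real measurements from our system.

```python
def measure(pole_top, pole_bottom, contact, rail_center, pole_length_m):
    """Return (height_m, offset_m) of the contact point.

    pole_top, pole_bottom : (x, y) pixel coordinates of the pole endpoints
    contact               : (x, y) pixel coordinates of the contact point
    rail_center           : (x, y) pixel coordinates at rail level, rail centerline
    pole_length_m         : known physical length of the reference pole, in meters
    """
    # The pixel length of the reference pole gives the scale (meters per pixel).
    pole_px = ((pole_top[0] - pole_bottom[0]) ** 2
               + (pole_top[1] - pole_bottom[1]) ** 2) ** 0.5
    scale = pole_length_m / pole_px

    # Image y grows downward, so height above rail level is (rail y - contact y).
    height = (rail_center[1] - contact[1]) * scale
    offset = (contact[0] - rail_center[0]) * scale
    return height, offset

# Illustrative values only: a 10 m pole spanning about 500 px in the frame.
h, d = measure(pole_top=(620, 80), pole_bottom=(600, 580),
               contact=(410, 300), rail_center=(400, 600),
               pole_length_m=10.0)
print(f"contact height: {h:.2f} m, lateral offset: {d:.2f} m")
```

This single-scale approximation is exactly why the reference pole must sit at roughly the same depth as the contact point; the lidar extension described next removes that assumption by supplying depth directly.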

Next, we plan to implement a lidar system in conjunction with the camera, in order to eliminate the need for a reference and to increase the accuracy, which is important for this particular application. With the lidar providing the depth information, we can fully map the 3D location of the points of interest with respect to the sensor. We are also planning to introduce detection and measurement of additional components to the solution.

Our overarching plan is to move towards building information modeling: mapping, detecting, and locating 3D objects and their orientation, linked to object information such as CAD files, for different ecosystems. Many already existing types of sensors could be utilized, such as CCTV cameras or the cameras already installed in buses and trams. A larger model could then be constructed from the large data set produced by a variety of sensors located all around an area.


So, in summary, utilizing AI for machine vision can offer high-quality information on-site and in real time at very competitive cost. We offer such a platform, which can be used for AI-based recognition, measurement, and locating of objects based on optical data. We are currently looking for a pilot project, as well as partnerships with hardware manufacturers, in order to streamline the platform for clients without suitable existing hardware.

So if you are interested, do not hesitate to contact us! Thank you!

Kalle Koskinen


Recon AI