Smart City - 4D Digital Twin



This document describes the implementation of the AM-X platform and how it enables building a 4D Digital Twin of operational environments.


The purpose of this document is to give a clear understanding of the AM-X platform implementation and its benefits for clients' ecosystems.



AM-X is an IOT/EDGE-AI solution developed for efficient data annotation, AI training, and system deployment. AM-X is a hardware-independent, modular part of the client's data ecosystem and enables building a 4D Digital Twin of a real-world operational environment.

With EDGE processing, AM-X is able to process large sets of data in real time and on-site, without a constant mobile network connection. AM-X solves the bottleneck problem of mobile networks by transmitting only the information relevant to the client's operations. AM-X is hardware independent and can be cost-efficiently adapted to any new hardware, which makes it well suited to the rapid development of sensor technology.

Relevant information can provide added value such as EDGE anonymization, obstacle detection, a 4D digital twin, asset management, predictive maintenance, and improved logistics operations for the client's ecosystem. In addition, the information and sensor fusion can be used for the robotization of on-site operations.

In this paper, we provide example use cases and describe the benefits of AM-X. We also describe the implementation process and the cost of implementing AM-X.


“Recon AI” - Recon AI Oy, Business ID 2827294-8;

“AM-X” - The Recon AI platform that provides position information of components (objects of interest) from sensor data. The AM-X platform is adapted to the needs of different industries.

“Relevant information” - Information that the client specifies to be relevant for their application.

“Object of interest” - An object that sensors are receiving information from and that the client specifies as an object of interest.

“Measurement accuracy” - The accuracy with which the device measures the dimensions, position, or distance of objects of interest.

“Measurement interval” - The interval at which objects are measured.

“Position database” - The database where objects' position information is saved.

“Output information” - Information that AM-X provides to the client. Usually the object of interest related to a structural drawing, point-cloud information on the position and orientation of the component, and information on environmental conditions.

“4D Digital Twin” - A digital model of the client's operational environment consisting of dimensional and time information. The model usually includes other IOT measurement information, such as temperature, pressure, and structural drawings with metadata, depending on the client's operations.

System infrastructure description

Network infrastructure
  • IOT Sensors - Sensors used in the client's ecosystem. These can be any sensors providing relevant information from the operational environment.

  • EDGE AI Node (part of AM-X) - The first calculation node after the sensor. One node can analyze data from several different IOT Sensors, and pre-processing of the data is done on this node. The node analyzes the relevant data from the sensors and provides a) a loop-back to on-site robotic or assisting information systems and b) ecosystem-relevant information to the Ecosystem Node. In addition, the node handles machinery-specific sensor fusion operations.

  • Ecosystem Node (part of AM-X) - The second calculation node after the EDGE AI Node. It can be used for post-analyzing the relevant information provided by the EDGE AI Nodes and provides a) a loop-back to on-site fleet-control operations and b) historically relevant information to the databases that store it. In addition, the node handles ecosystem-specific data fusion operations.

  • Cloud Database - A cloud database used for storing relevant data and providing it to any desired 3rd party operational systems, with a possible feedback loop from those systems.

  • 3rd party services - External operational systems that access the relevant data in the cloud database through APIs, if wanted.

AM-X Security Overview 

All parts of the system handle only the required data, without unwanted personal information. Access to data is limited according to use cases.

Save valuable data without unwanted data. Processing data on the EDGE prepares it before saving to meet, for example, General Data Protection Regulation (GDPR) requirements and privacy legislation. AM-X can anonymize data already on-site; for instance, AM-X can blur faces from video on the EDGE. As a result, we can make sure that personal information is not saved to databases unless it is needed.
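As a minimal sketch of what on-EDGE anonymization can look like (our illustration, not the actual AM-X implementation), a detected face region can be pixelated before the frame ever leaves the device; the frame and region coordinates below are invented:

```python
import numpy as np

def pixelate_region(frame: np.ndarray, box: tuple, block: int = 8) -> np.ndarray:
    """Return a copy of the frame with the (x, y, w, h) region block-averaged."""
    x, y, w, h = box
    out = frame.copy()
    roi = out[y:y + h, x:x + w]                       # view into the copy
    for by in range(0, h, block):
        for bx in range(0, w, block):
            patch = roi[by:by + block, bx:bx + block]
            patch[:] = patch.mean()                   # replace block with its mean
    return out

# Invented 64x64 single-channel frame with a "face" at (16, 16, 32, 32).
frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
anonymized = pixelate_region(frame, (16, 16, 32, 32))
```

The averaging is irreversible, so even if the anonymized frame is stored in the cloud, the original identity cannot be reconstructed from it.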

Access control. AM-X provides data through APIs with authentication and reveals only the needed information. For example, AM-X can provide an HTTP REST API access point to selected data for selected users. The API provides a limited view of the data and requires user authentication before access.
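A token-to-fields mapping is one simple way to implement such a limited view; the sketch below is a hypothetical illustration (the token names and record fields are our assumptions, not AM-X's actual API):

```python
# Hypothetical token-based access control: each token maps to the set of
# fields its owner is allowed to see; everything else is stripped.
API_TOKENS = {
    "maintenance-team": {"component_id", "position", "fault_probability"},
    "public-dashboard": {"component_id", "position"},
}

def limited_view(token: str, record: dict) -> dict:
    """Authenticate the caller and return only the fields it may see."""
    if token not in API_TOKENS:
        raise PermissionError("authentication failed")
    allowed = API_TOKENS[token]
    return {key: value for key, value in record.items() if key in allowed}

record = {"component_id": "sw-031", "position": (61.50, 23.79),
          "fault_probability": 0.12, "raw_video_ref": "cam7/frame-991"}
```

In a real deployment the same filtering would sit behind an authenticated HTTP REST endpoint, but the principle is the same: the API never returns more than the caller's role requires.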

AM-X Security infrastructure

The EDGE AI Node enriches and cleans the data: valuable AI detections are added and unwanted data is removed. As a result, unwanted data does not need to be saved to the cloud. The connection between the EDGE AI Node and the Ecosystem Node is either local or a secure connection over a non-private network (SSH, VPN).

The Ecosystem Node can be accessed through SSH or VPN, which provide encryption and authentication. This is primarily for the database connection.

3rd party systems can access cloud databases through APIs, which provide a limited view of the data.


Detection of objects, both for tracking and for the digital twin, will be implemented using the AM-X platform. The AM-X platform is based on neural network algorithms, enabling object detection and localization on edge devices on-site without excessive network load or latency.

Neural networks enable transferring the computationally heavy part of the analysis to an off-site a priori training process, which, given sufficient data, enables computationally lightweight analysis on an edge device. The data requirements are further mitigated by various augmentation techniques as well as by utilizing synthetic data.
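As an illustration of the augmentation techniques mentioned above, a single labeled frame can be multiplied into several training samples with mirroring, noise, and brightness changes (the specific transforms here are examples, not AM-X's actual pipeline):

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> list:
    """Multiply one labeled sample into several training samples."""
    return [
        image,                                       # original
        np.fliplr(image),                            # horizontal mirror
        image + rng.normal(0.0, 0.01, image.shape),  # simulated sensor noise
        np.clip(image * 1.2, 0.0, 1.0),              # brightness change
    ]

rng = np.random.default_rng(0)
sample = rng.random((32, 32))        # one invented 32x32 frame
augmented = augment(sample, rng)     # 1 labeled sample -> 4 training samples
```

Because the label (e.g. a marked point, mirrored along with the frame) carries over to each variant, one annotation effort yields several training examples.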

Three cornerstones for digitizing Smart Cities

1. Data Anonymity

By providing data anonymization already on the EDGE AI Node, raw sensor information can be offered to the markets and more machine vision solutions can be implemented in the ecosystem. The AM-X platform can provide pre-anonymized data, which makes it possible to offer needed raw-data samples openly to the markets while remaining compliant with privacy and data storage legislation.

2. Economical scaling

Moving from ERP-centric software ecosystems to OpenData-based ecosystems provides better possibilities for new market operators to access data. With high-quality, open, relevant data available in the markets at low or no cost, market operators are able to provide better services to the field and thereby improve productivity. Such services can be implemented all the way from sensor fusion in edge nodes to big-data analytics based on open data.

3. 4D Digital Twin

AM-X can recognize the location of components in city structures. This information can be used to build a 4D Digital Twin of the whole city. By applying sensor fusion and using both existing and new sensors in the ecosystem, the location of each component can be measured at short intervals, which adds the time dimension to existing or new 3D virtual models. 4D modelling makes it possible to use AR systems for a real-time virtual tour of the whole city, and the same information can be used as part of building information modelling: moving from a preventive maintenance model to a predictive maintenance model, providing asset management information, quality, fleet, and logistics control, and supporting robotic systems.

AM-X Implementation description

Phase 1 - 3D Digital Twin. The client provides a high fidelity measurement of the environment. Based on this, Recon AI will build a framework for the synthetic data and implement a measurement simulation algorithm.

Phase 2 - Detection of immovable objects. Recon AI trains and implements an object detection and localization system for the immovable objects.

Phase 3 - Tracking algorithm. Based on the reference of the aforementioned immovable objects and 3D environment, Recon AI develops and implements a tracking algorithm for the measurement vehicle.

Phase 4 - Detection of movable objects. Recon AI trains an object detection and localization system for the movable objects and implements it on the client's edge device.

Phase 5 - 4D Digital Twin. Recon AI finalizes the implementation of the 4D digital twin, which is continuously updated with respect to the movable objects.

AM-X implementation

We estimate the first implementation cycle to take 8-14 months. Training a new object of interest on the AM-X platform is estimated to take three weeks on average.




An optimised neural network makes it possible for AM-X to analyse large data sets in real time. Real-time analysis makes it possible to run AM-X without large data storage.



After training the neural network in the cloud, the optimised neural network makes it possible to analyse data on-site with low computing power requirements. These features make EDGE processing possible and solve the problems caused by mobile network bandwidth restrictions.

Hardware independent


AM-X is a hardware-independent solution that can be trained for any existing or new hardware in the client's ecosystem. Using data from existing and new sensors, the benefits of sensor fusion can be maximised.



Thanks to its ability to process data on the EDGE, AM-X can anonymize data already on-site. On-site anonymization decreases cyber security requirements for the rest of the information ecosystem.

Low power consumption


Due to the optimised calculation requirements of the neural network, AM-X decreases the energy consumption of processing and utilises unused processors in EDGE/IOT devices. With decreased power consumption and utilisation of processors already present in IOT devices, a smaller ecological footprint can be achieved.



Recon AI is continuously working on optimising the training pipeline for new components and new sensor data. The optimised training pipeline makes it possible for our clients to apply AM-X at a scale that benefits their ecosystem in full.


Obstacle detection


The transportation and automotive industries are making huge efforts to make their machinery safer and more autonomous. AM-X can track the motion of vehicles and estimate routes in real time, which makes it possible to use the data for obstacle detection.



Building a 4D (real-time) digital twin of a city is one of the key elements in making cities truly smart. AM-X can provide real-time location information for different components of interest. By applying sensor fusion, the platform provides 4D information for building information modelling.

Asset management


By relating real-time location information of components of interest to structural drawings such as CAD, AM-X can provide up-to-date information on your assets in buildings and infrastructure. This information can be used as a baseline for designing operations and making 5D BIM modelling a reality.

Predictive maintenance


Predictive maintenance makes it possible for our clients to allocate their resources efficiently during maintenance operations. By training AM-X to recognise faulty components and applying the measurement abilities of the platform, operators can move from preemptive maintenance models to predictive maintenance models.

Logistic control


In all operations, inefficient logistics is one of the major cost drivers. By applying the route-tracking abilities of the AM-X platform, all on-site logistics operations can be monitored and optimised.

Automation and robotics


Automated solutions often require huge datasets. Analysing these datasets with traditional methods requires huge computing power. AM-X's real-time, on-site processing makes it possible to train AM-X to guide robotic systems and machines autonomously on-site, with low requirements for the processors in use.

What do we actually do?

Company introduction

Recon AI is a young software company with four people currently working with us, and we implement artificial intelligence for machine vision. We are looking for projects and partners. As a software company, we are especially looking for partnerships with hardware manufacturers, so that we can get the correct sensors for our customers, as well as software partnerships to streamline our implementation process.

Why AI?

So what is the motivation for using artificial intelligence? Compared to traditional techniques, a properly trained AI solution can be computationally extremely lightweight while producing high-quality results. In other words, AI is very cost-effective, enabling real-time on-site operation on embedded systems.

Chess example

Here is a fun little example: Stockfish 8 is a chess engine that has been superior to human players for many years now. It considers 70 million move possibilities per second and draws on essentially the entire history of professional chess. Google's AlphaZero is a "true" AI that considers only 80,000 move possibilities per second and trained only by playing against itself for a few hours, yet it beat Stockfish 8 in 28 games out of one hundred and never lost. Now of course, this type of AI is quite different from what we are (at least currently) using, but it highlights the advantages of AI well.



What does an AI need for lightweight operation? The computationally heavy part lies in the training, during which the AI needs to process a lot of data. However, this, in addition to testing and validation, can be done in advance, for example on a computer cluster. The network itself is light and can then be transferred easily to the on-site machine, for example over the mobile network. There is also the added benefit of automated updates of the AI for additional functionality or fidelity as per the client's needs, which is convenient.


Training the AI

A bit about the training process. An AI is essentially a combination of a network architecture and network weights. The architecture must be designed for a given "problem type", and the weights are then optimized for the problem, which is called "training". So first the network architecture must be designed. After that, the training data must be gathered and labeled, which essentially means solving the problem for that data; for example, if the AI is supposed to find certain points, those points must be marked in the training data. This labeled data can then be augmented, which essentially uses certain tricks to multiply the data for training. Finally, the augmented, labeled data is run through the untrained network and the weights are optimized by a certain algorithm. The training process gives us the weights, which in conjunction with the architecture form the AI that can now be used.
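The process above can be sketched with a deliberately tiny "network": a single linear unit whose weights are optimized by gradient descent on labeled data. Real networks are far larger, but the training loop has the same shape (all numbers here are invented for illustration):

```python
import numpy as np

# The "architecture" is y = X @ w + b; "training" finds the weights w, b
# from labeled data by repeatedly stepping against the error gradient.
rng = np.random.default_rng(42)
X = rng.random((200, 3))                          # gathered data
true_w, true_b = np.array([2.0, -1.0, 0.5]), 0.3
y = X @ true_w + true_b                           # labels: the "solved problem"

w, b = np.zeros(3), 0.0                           # untrained weights
for _ in range(3000):                             # the optimization algorithm
    err = X @ w + b - y
    w -= 0.2 * (X.T @ err) / len(y)               # gradient step on the weights
    b -= 0.2 * err.mean()
# w and b now approximate true_w and true_b; together with the
# architecture they form the usable "AI".
```

The heavy part is this loop; once it has run, applying `X @ w + b` to new data is cheap, which is exactly why the trained network can live on an edge device.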


Now, what we are actually offering is a platform that can be utilized to implement AI-based solutions to various machine-vision-related problems. As I said before, the solutions are expected to be relatively cost-effective, computationally lightweight, convenient to use, and easy to update or expand upon.

What we need from the client is obviously a description of the problem and a large set of representative data. Based on the possibly existing sensors and the problem, we can assess whether or how easily the problem can be solved. If there are no existing sources of data, we will assess the needs based on the problem description. As a software company, we rely on partnerships with hardware manufacturers to provide the required sensors. It is important to note that although we can certainly assess the suitability of a particular type of data, the AI needs to be trained before it can be used or tested.


Here is an illustration of how the relationship between us and the client could work. As I mentioned, we need the data and the problem description. The sensors, i.e., the source of the data, can be pre-existing or new. We will of course help with the choice of sensors should new hardware be needed. Based on the data and the problem description, we will build the AI, train it, test it, and implement it for the on-site application. Once the implementation is operational, new data can be gathered for additional functionalities or for further improvement of accuracy, should the conditions for example change.

AM-RAIL™️ example


Here is an example from the railroad industry. In order for trains to run safely and on time, the surrounding infrastructure has to work really well. However, the different components of the infrastructure, for example the electrification and the superstructure, wear down and may eventually break. If it gets to that point, the economic and even human damages can be quite serious. This is why preventive maintenance and asset management are so important: so that it never gets to that point. Thus, extensive active monitoring is required. Much of the current monitoring equipment relies on heavy measurement wagons that are expensive to buy, run, and maintain. This is where we come in.


Our AM-Rail platform is based on relatively cheap sensors that can be installed on, for example, cargo trains without the need for an external wagon. We then use artificial intelligence to train on and analyze the data feed from the sensors, i.e., cameras, lidars, etc., and extract the relevant information from the feed. The data analysis can be done conveniently with a light and cheap embedded device on board the train in real time. And as I said before, the software can be updated easily over the mobile network. Thus, money is saved.


Here is an example of an implementation. The point here is to measure the contact point height with respect to the rail level and its horizontal displacement from the center of the rails. The contact point is located at the top of the red line in the picture. The image is a frame from a video recorded with a single consumer-grade camera. First, the AI finds the rails and three points of interest in the frame. The points of interest are the previously mentioned contact point (the top of the vertical red line) and the two end points of the pole (the endpoints of the blue line). Since this is done with a single camera, the pole is needed for reference, as the picture doesn't contain depth information. After the points and rails are found, the two measures can be calculated using simple math. Here, we reach approximately 3% accuracy with a consumer-grade camera and an AI that runs on a cheap CPU and whose training costs next to nothing.
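The "simple math" is essentially a proportion: the pole's known real-world length gives a metres-per-pixel scale at that depth, which converts the contact point's pixel offsets into metres. The numbers below are invented for illustration, not taken from the actual measurement:

```python
def contact_point_measures(pole_px, pole_m, height_px, offset_px):
    """Convert pixel measurements to metres using the pole as a scale reference."""
    m_per_px = pole_m / pole_px                  # scale at the pole's depth
    return height_px * m_per_px, offset_px * m_per_px

# Invented frame: an 8 m pole spans 400 px; the contact point sits 275 px
# above rail level and 10 px from the track centreline.
height_m, offset_m = contact_point_measures(400.0, 8.0, 275.0, 10.0)
# height_m = 5.5, offset_m = 0.2 (metres)
```

This single-camera proportion only holds at the reference object's depth, which is exactly why the lidar upgrade described below removes the need for the pole.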

Next, we plan to implement a lidar system in conjunction with the camera in order to get rid of the referencing and increase the accuracy, which for this particular application is important. With the lidar providing the depth information, we can fully map the 3D location of the points of interest with respect to the sensor. We are also planning to introduce detection and measurement of additional components to the solution.

Our overarching plan is to move towards building information modeling through 3D object mapping, detection, and orientation and/or location with respect to object information, for example CAD files, for different ecosystems. Multiple already existing types of sensors could be utilized, such as CCTV cameras or the cameras already installed in buses and trams. A larger model could then be constructed from the large data set produced by a variety of sensors located all around the city.


So, in summary, utilizing AI for machine vision can offer high-quality information on-site and in real time at very competitive cost. We offer such a platform, which can be used for AI-based recognition, measurement, and locating of objects based on optical data. We are currently looking for a pilot project as well as partnerships with hardware manufacturers in order to streamline the platform for clients without suitable existing hardware.

So if you are interested, do not hesitate to contact us! Thank you!

Kalle Koskinen


Recon AI

Implementation of AM-RAIL™





Ecosystem - Added value



In this article, we describe how AM-RAIL™ can be implemented in our clients' ecosystems. The article includes a description of the implementation process and explains the added value for railway operations. The goal of this article is to give an idea of how AM-RAIL™ can be implemented in our clients' ecosystems, as we aim to gain pilot projects for AM-RAIL™.


Implementing AM-RAIL™ does not require AI expertise from the client; only data from the client's ecosystem is required. This data can be gathered from existing or newly installed sensors (cameras, LiDARs, sonars). The data gathered from the field needs to be related to a CAD drawing of the components. These CAD drawings can also be used for recognition operations by rendering the drawings and relating them to the sensor data.

Implementation process

Phase 1 - Data acquisition. The client provides the CAD drawings, video and LiDar cloud data from the railway network. This can be done by using existing data or by gathering data with a new hardware set-up. 

Phase 2 - Data annotation. Recon AI labels the video and LiDar data and renders the CAD drawings to annotate the data for the training phase. 

Phase 3 - Training. Recon AI plans, builds and tests neural network models for training the AI to gain sufficient recognition performance.

Phase 4 - Deployment. Recon AI deploys the trained AI algorithms to the on-site sensors via API.

Phase 5 - Development. Recon AI further develops the solution in order to recognise additional components of interest and to improve recognition performance and measurement accuracy.

We estimate the first implementation cycle to take 6-12 months.


Ecosystem - Added Value

The immediate added value of AM-RAIL™ is the automation of measurements currently performed in installation operations. In addition, the point-cloud information of components can be utilized for object detection, 3D modelling of infrastructure (asset management), and combining information on infrastructural changes for predictive maintenance operations.



We are looking for clients to start a pilot with. Interested? Feel free to contact:

+358 (0) 50 587 1254

More information:



Showing is better than telling - take a look at the AM-City™ video.

Recon AI
Henri Memonen



AM-RAIL™ as a part of Smart City




What is a Smart City?



Summary of benefits to different stakeholders

Emphasis on data acquisition at an early stage


In this article, we describe the use case of the AM-X™ platform in Smart City transportation, the requirements for the client's ecosystem, and the implementation of the solution, emphasizing both short- and long-term benefits. The goal of this article is to give an idea of how AI-solutions can be implemented in our clients' ecosystems and to emphasize the importance of gathering vast amounts of clean data at an early stage of development.

What is a Smart City?

”A smart city is an urban area that uses different types of electronic data collection sensors to supply information which is used to manage assets and resources efficiently. This includes data collected from citizens, devices, and assets that is processed and analyzed to monitor and manage traffic and transportation systems, power plants, water supply networks, waste management, law enforcement, information systems, schools, libraries, hospitals, and other community services.”



The ecosystem is commonly built on three essential elements: sensors, analytics platforms, and ERP-systems.

On a general level, the process in these kinds of ecosystems is quite straightforward:

  1. Sensors measure the environment and collect data for analytic platforms, with time and location labels.

  2. Analytics platforms provide analyzed guidance information to the client’s ERP-system.

  3. In the ERP-system, operators inspect and label the data, and the labels are provided back to the analytics platform.

  4. Self-learning algorithms on the analytics platform read the measurement data and provide more accurate predictions with each new data point.
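Step 1 can be sketched as a data record that carries its time and location labels from the sensor onwards, with operator labels appended later in step 3 (the field names below are our assumptions, not a defined AM-X schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SensorReading:
    """One measurement as it leaves the sensor, already time/location labeled."""
    sensor_id: str
    value: float
    lat: float
    lon: float
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    operator_labels: list = field(default_factory=list)  # filled in later (step 3)

reading = SensorReading("accel-07", 0.42, lat=60.17, lon=24.94)
reading.operator_labels.append("fault")  # a label flowing back from the ERP-system
```

Attaching the time and location at the moment of measurement is what makes the later feedback loop possible: a label added in the ERP-system can always be traced back to exactly where and when the data was gathered.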


When implementing AM-RAIL™ as a part of a Smart City, a variety of options for sensors, analytics platform models, and ERP-systems are available for the ecosystem. Some of these options and their related considerations are listed below as suggestions that we recognize as valuable to the Smart City ecosystem.


All sensor data should be labeled with time and location labels at the time of measurement.

 Analytic platform models:


We have categorized the labels into the following categories:

  • Fault labels (a label that indicates a fault in the ecosystem)

  • Effect labels (a label that indicates causal effect in the ecosystem)

  • Component labels (data that is labeled in relation to a component)

  • Simulated data labels (labels from simulated data generated by engineering models)
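To keep labels from different ERP-systems consistent, the categories above could be encoded as a small fixed taxonomy; this sketch (the names are ours) rejects anything outside it:

```python
from enum import Enum

class LabelCategory(Enum):
    FAULT = "fault"            # indicates a fault in the ecosystem
    EFFECT = "effect"          # indicates a causal effect in the ecosystem
    COMPONENT = "component"    # data labeled in relation to a component
    SIMULATED = "simulated"    # label from simulated engineering-model data

def validate_label(category: str) -> LabelCategory:
    """Reject labels outside the agreed taxonomy."""
    return LabelCategory(category)  # raises ValueError for unknown categories
```

A fixed taxonomy matters because the self-learning algorithms downstream treat each category differently; a free-text label field would silently fragment the training signal.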


We often hear from our networks that they do not wish to be the first to implement AI-solutions in their own operations, but would rather wait for the markets to mature and consider applying AI-solutions at a later stage. We believe the major fault in this reasoning is the assumption that once AI-solutions have developed far enough, AI will be easily applicable to various operating models. This mixes up the intelligence explosion with use-case-based machine learning applications, which is a common mistake. An intelligence explosion would produce a machine that only requires computing power to develop; machine learning, by contrast, requires adaptation to the specific use cases. And if the intelligence explosion did happen, waiting would not matter anyway, because such a machine would be solving most of our problems one way or another.

The second line of reasoning we often hear from our clients is that the system would only bring benefits after several years of training the AI, which would mean that the ROI of the project investment does not make sense. Our belief is that when the implementation process is built smartly, we can add value to the client's ecosystem early on, in different phases that support each other. This way the investment starts paying back quickly, and the benefits of the system increase as more data is gathered.

For implementing AM-RAIL™ in railway and city maintenance operations, we estimate the following timetable and benefits from the system.


Phase 1 - Sensors. The current planned maintenance is improved by providing an increasing amount of data from the field. This data can be used for alarm-based maintenance operations as soon as the sensor data is gathered, and the data is stored in the database. The estimated implementation time is 1.5-2 years. Components can be recognized, and this information can be used for ownership management.

Phase 2 - Analytics platform. An analytics platform is chosen and implemented. The platform can recognize decay rates from the data gathered by the sensors. The decay information can be used for maintenance operations, and the operations can be moved towards a preventive maintenance model.

Phase 3 - Labels. The analytics platform starts to receive labels from the ERP-systems for its self-learning algorithms. The algorithms receive fault, effect, component, and simulated data labels from the ERP-systems, so the platform performs better with each new data point. Maintenance can be moved towards a predictive maintenance model.

Phase 4 - AR and automation. AR-solutions can be implemented to provide guidance for field operators. While providing guidance, the guiding person can label components in the pictures and thereby train machine vision solutions to recognize these components. After a sufficient number of component labels, machine vision can be used for guiding robotics, and maintenance operations can be further automated in steps. The ultimate goal is to fully automate maintenance operations.

As we can see, AI-solutions bring benefits to operations in the short term as well.


A summary of the benefits to different stakeholders

Construction and maintenance operators

Recognizing the need for maintenance before additional costs occur means moving to a predictive maintenance model. This increases route capacity and decreases the costs of maintenance operations by decreasing track down-time. In addition, component recognition can be used to maintain up-to-date information for ownership management.

By adding measurement solutions that provide data for BIM engineering software and supply real-time, up-to-date information, the right baseline information can be provided for design operations. These measurement solutions also remove the need for manual measurement of structures, and gathering the measurement data into the database can be automated.

In addition, an AR-platform can be used to increase efficiency and quality in installation operations by providing remote guidance during those operations. At the same time, operators can label video data, and the labels can be used for training machine vision solutions.

For traffic and transportation operators

Moving to a predictive maintenance model decreases the need for maintenance operations on the route and improves route capacity. A facial recognition system provides smooth passenger transportation and a baseline for TaaS-solutions.

The smart city, its citizens, and the institutes and companies operating in the city

Camera systems attached to transportation machinery can measure and recognize components in the surrounding structures along their routes, covering anything from the surveillance of street lights to the condition of buildings. Changes in these structures can be monitored, and the data can be used to allocate maintenance resources. The same data can then be used to build an up-to-date digital model of the city. All this data can serve as a platform for new inventions, resulting in a better quality of life for citizens.


Emphasis on data acquisition at an early stage

Implementing AI-solutions in most ecosystems requires a lot of data to work with. The data needs to be raw, clean, and labeled to provide sufficient information for the self-learning algorithms that aim to provide information for operations and, finally, make automation possible for the application.

When a system requires huge amounts of clean raw data from several sensors, the cost of a single measurement becomes a major cost driver. This means that heavy and expensive sensors won't do the trick; instead, by using multiple cheap sensors across the ecosystem, most of the benefits can be achieved.

In addition, the know-how of field operators needs to be digitalised and stored as data. Machine learning means the machine needs to learn what to do, and so far humans act as the teachers. This means that personnel working in the ecosystem need to use digital tools in their operations in order to provide sufficient information for machines to eventually operate automatically.

We recommend that all operators start collecting data by implementing cheap sensors in their ecosystems and storing the raw data in databases for future needs. Ungathered information can never be recovered.


Recon AI
Henri Memonen


Don’t wait for your data to arrive -

let’s get it for you.