In the rapidly evolving world of edge computing and artificial intelligence (AI), there are several crucial stages to consider. This blog delves into the complexities and innovations at each stage, beginning with Local Execution, where AI models are deployed directly on edge devices for real-time data processing. We then explore Contextualization, focusing on the local handling of contextual information for personalized responses. The third stage, AI to AI Communication, examines the critical coordination between multiple AI nodes, facilitated by edge microservices. Finally, AI-adapted Choreography highlights how multiple AI models across an edge network can dynamically interact with each other, optimizing overall system performance. Through these stages, the role of mimik technology emerges as pivotal, enabling seamless integration and efficient operation of AI models in edge computing environments.

Stage 1: Local Execution

In this stage, the focus is on deploying the AI model at the edge, which means running the model directly on the device that generates the data. Typically, the model is trained in the cloud and then pushed to the edge devices such as cameras or sensors. The purpose is to perform real-time recognition or analysis of data streams locally without relying on constant communication with the cloud.

The information generated by local execution can be handled in different ways. If the recognition results are conclusive, only the result is sent to the cloud for further processing or storage. However, if the recognition is inconclusive, the image or relevant data may be sent to the cloud to retrain the model. Additionally, a lower-resolution copy of the data stream can be archived for reference purposes.

For example, consider a security camera system using edge computing. The camera captures live video footage and runs an AI model locally for real-time object detection. Instead of sending every frame to the cloud for analysis, the AI model is deployed directly on the camera. The camera processes the video stream locally, identifies objects of interest, and sends only the relevant information, such as detected objects and their locations, to the cloud for further processing or storage.
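This conclusive-versus-inconclusive routing can be sketched in a few lines. The threshold, field names, and routing labels below are illustrative assumptions, not part of any particular camera's firmware:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff between "conclusive" and "inconclusive"

def route_result(label, confidence, frame):
    """Decide what leaves the device, following the stage-1 pattern:
    a conclusive detection ships only compact metadata to the cloud;
    an inconclusive one ships the raw frame so the model can be retrained."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"upload": "result", "payload": {"label": label, "confidence": confidence}}
    return {"upload": "frame_for_retraining", "payload": frame}

# A confident detection sends only metadata; an unsure one sends the frame itself.
conclusive = route_result("person", 0.93, frame=b"...jpeg bytes...")
inconclusive = route_result("unknown", 0.41, frame=b"...jpeg bytes...")
```

In practice the archived low-resolution copy would be a third branch, and the upload itself would go through whatever transport the device uses; the sketch only shows the decision logic.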

It is essential to separate the model from the execution process because models require regular updates and remote payload management. Mimik enables this separation by treating the model as a part of the edge microservice running on the device. The microservice acts as an interface between the cloud and the AI process, abstracting the handling of model updates from the recognition process. Another edge microservice handles the results, whether sending them to the cloud or other local systems. This ensures that the model can be easily updated and fine-tuned without disrupting the process of recognition or analysis.

By exposing the capabilities of handling the model and results as a local API, mimik simplifies the development process of AI solutions, making integrating edge computing into the workflow easier.
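The separation of model management from recognition can be illustrated with two cooperating services. The class and method names here are hypothetical stand-ins for edge microservices and do not reflect mimik's actual API:

```python
class ModelManagerService:
    """Hypothetical edge microservice that owns the model lifecycle.
    The cloud pushes updates here; recognition never handles them directly."""
    def __init__(self, version, weights):
        self._model = {"version": version, "weights": weights}

    def get_model(self):
        # Exposed as a local API so co-located services can fetch the current model.
        return dict(self._model)

    def update_model(self, version, weights):
        # Swapping the model does not interrupt the recognition service.
        self._model = {"version": version, "weights": weights}

class RecognitionService:
    """Hypothetical service that runs inference using whatever model
    the manager currently serves."""
    def __init__(self, manager):
        self._manager = manager

    def recognize(self, frame):
        model = self._manager.get_model()
        # Stub inference: a real service would run the weights against the frame.
        return {"model_version": model["version"], "label": "object"}

manager = ModelManagerService("v1", weights=b"...")
recognizer = RecognitionService(manager)
first = recognizer.recognize(b"frame-bytes")
manager.update_model("v2", weights=b"...")   # cloud pushes a new model
second = recognizer.recognize(b"frame-bytes")
```

The point of the sketch is that `update_model` and `recognize` live in different services: the model can roll forward to `v2` between two inferences without the recognition path ever pausing.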

Stage 2: Contextualization

In this stage, the model is executed locally, and the handling of the context in which the process occurs is also done locally. The context refers to events received by the device running the process or other devices within the same cluster, such as events triggered by user inputs through a UI or sensor inputs.

Local contextualization allows for the personalization of the model based on user preferences or specific scenarios. By processing events locally, edge devices can provide tailored experiences or responses without constantly sending data to the cloud for analysis and decision-making.

For example, consider an intelligent home system using edge computing. The system includes various devices like smart speakers, cameras, and sensors. Each device runs AI models locally to process data and respond to user commands. When a user speaks a command to a smart speaker, the AI model on the speaker processes the command locally, taking into account the context of the user’s preferences and the current state of the home environment. The speaker can provide personalized responses or control other devices within the cluster based on local contextual information.

Mimik achieves contextualization by running multiple edge microservices on the same node and facilitating interaction with other edge microservices on different nodes. This decentralized approach minimizes the need for data transfer to the cloud, as the devices within the cluster can communicate and share contextual information directly.
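A minimal sketch of local contextualization follows. The event shape, the `user_location` key, and the command strings are assumptions for illustration; the idea is only that context accumulates locally from events and shapes the response without a cloud round-trip:

```python
class ContextStore:
    """Local record of events seen by this node (or published by
    other nodes in the same cluster)."""
    def __init__(self):
        self.state = {}

    def apply_event(self, event):
        # Each event updates one piece of local context, e.g. a sensor reading.
        self.state[event["key"]] = event["value"]

def handle_command(command, context):
    """Personalize the response from local context alone."""
    if command == "lights on":
        room = context.state.get("user_location", "living room")
        return f"Turning on lights in the {room}"
    return "Unknown command"

ctx = ContextStore()
ctx.apply_event({"key": "user_location", "value": "kitchen"})
response = handle_command("lights on", ctx)
```

Here a motion sensor's event changes where the command takes effect, which is the stage-2 behavior: the decision is tailored on-device rather than deferred to the cloud.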

Stage 3: AI to AI Communication

In this stage, the key insight is that a complex system at the edge is made of many nodes, each of which can have an AI handling the node’s logic. In this environment, while the execution of the model happens at the edge, the integration between each AI is coordinated via the cloud. It must be possible to allow direct communication between each AI to handle local decision-making, by having the different AIs either exchange the models themselves or exchange the events generated by the AI process using the models.

For example, consider an autonomous driving system using edge computing. The system comprises multiple edge devices, such as cameras, LiDAR sensors, and control units, each running its own AI model for perception, decision-making, and control. These devices must exchange information and coordinate to make safe and efficient driving decisions. Instead of relying solely on a centralized system in the cloud, direct communication between the edge devices’ AI models is essential for local decision-making.

Mimik enables AI-to-AI communication by allowing models to be handled by edge microservices and creating an ad-hoc edge service mesh. This allows direct communication between edge microservices within the same node or between edge microservices running on different nodes. With mimik, multiple AIs at the edge can exchange information or models with a well-defined contract, facilitating coordinated actions without heavy reliance on a centralized cloud system.
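The "well-defined contract" can be made concrete with a small sketch: each node serializes events to an agreed schema, and receivers reject anything that breaks it. The schema fields and node names are hypothetical, and the in-process `receive` call stands in for a hop across the service mesh:

```python
import json

# Assumed minimal event contract shared by every AI node on the mesh.
EVENT_SCHEMA = {"source", "type", "payload"}

class EdgeNode:
    """Hypothetical AI node that publishes events directly to its peers."""
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.received = []

    def link(self, peer):
        self.peers.append(peer)

    def publish(self, event_type, payload):
        event = {"source": self.name, "type": event_type, "payload": payload}
        for peer in self.peers:
            peer.receive(json.dumps(event))  # stand-in for a mesh hop

    def receive(self, message):
        event = json.loads(message)
        if set(event) != EVENT_SCHEMA:
            raise ValueError("event violates the agreed contract")
        self.received.append(event)

camera = EdgeNode("camera")
control = EdgeNode("control-unit")
camera.link(control)
camera.publish("obstacle_detected", {"distance_m": 4.2})
```

Because the perception node talks to the control node directly, the obstacle event never has to transit the cloud; the contract check is what keeps independently developed AIs interoperable.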

Stage 4: AI-adapted Choreography

In this stage, the focus is on dynamically choreographing the behavior of multiple AI models across the edge network to optimize overall system performance, resource allocation, and coordination. The communication between AI models within each node and between nodes adapts to maximize the collective effectiveness of the group of nodes.

For example, let’s consider a smart city infrastructure using edge computing. The infrastructure consists of various edge devices deployed throughout the city, such as traffic cameras, environmental sensors, and smart streetlights. Each device runs its AI model to perform specific tasks like traffic monitoring, air quality analysis, and intelligent lighting control.

In the AI-adapted choreography stage, the AI models within each device collaborate and communicate to optimize the overall performance of the smart city infrastructure. The models exchange information about traffic conditions, environmental data, and lighting requirements. Based on this information, they dynamically adapt their behavior to ensure efficient traffic flow, minimize energy consumption, and respond to changing environmental conditions.
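A toy sketch of this choreography: one model's observation changes another model's behavior, with no central coordinator. The shared dictionary stands in for state exchanged over the mesh, and the threshold and labels are assumptions for illustration:

```python
# Stand-in for observations the nodes exchange over the edge network.
shared_observations = {}

def traffic_camera(vehicle_count):
    """Traffic-monitoring model: publishes its conclusion for peers to use."""
    shared_observations["traffic"] = "heavy" if vehicle_count > 50 else "light"

def streetlight():
    """Lighting-control model: adapts brightness to the traffic model's output,
    saving energy when the road is quiet."""
    if shared_observations.get("traffic") == "heavy":
        return "full brightness"
    return "dimmed"

traffic_camera(80)           # rush hour observed by the camera's model
rush_hour = streetlight()    # lighting adapts without central coordination
traffic_camera(10)           # late-night traffic
late_night = streetlight()
```

The choreography is emergent: each model only reads what its peers published, yet the system as a whole trades energy against safety as conditions change.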

Since these systems are generally built by many organizations, each with its own standards and protocols, the context and the AI of each system component also help define the protocol between the components, allowing components that were never designed to communicate with each other to exchange information.

Mimik plays a crucial role in enabling AI-adapted choreography by providing the infrastructure for communication and coordination between the AI models across the edge network. It allows the AI models running on different devices to exchange data, share insights, and collectively make decisions to optimize the operation of the smart city infrastructure. Mimik’s edge service mesh facilitates the dynamic choreography of AI models and ensures efficient collaboration.

In summary, in the AI-adapted choreography stage, mimik enables the dynamic coordination and optimization of multiple AI models across an edge network, allowing them to collectively achieve better system performance, resource allocation, and coordination in complex scenarios like a smart city infrastructure.


The role of mimik, as mentioned in the text, is to enable these stages by treating the AI model as a part of the edge microservice running on the device. It abstracts the handling of model updates from the recognition process and facilitates the exchange of information between edge microservices. By providing a local API and creating an ad-hoc edge service mesh, mimik simplifies the development process and integration of edge computing into AI workflows.

