

Frequently Asked Questions

EdgeCloud & Hybrid Edge Platform

Edge computing is a distributed information technology architecture in which client data is processed at the periphery of the network, as close to the originating source as possible. It pushes the frontier of computing applications, data and services away from centralized nodes.

Edge computing refers to data processing power at the edge of a network instead of holding that processing power in a cloud or a central data warehouse. The name “edge” is derived from network diagrams, where the edge typically signifies the point at which traffic enters or exits the network.
The major advantage of this architecture is rapid and low-cost deployment of (computing and/or storage-intensive) applications on generic servers shared amongst many applications with the aid of virtualization and orchestration technologies.
  1. The current hierarchical architecture makes central cloud resources and network connectivity the bottleneck for future growth. Sending data from hundreds of billions of client devices to tens of millions of centralized cloud servers wastes bandwidth and energy, and it has serious social and economic implications.

  2. Another disadvantage of central cloud architecture is the developers’ reliance on cloud service providers who have access to the apps and the data stored or processed in their servers. As a result, today, a handful of very large companies have control over the vast majority of consumer and enterprise data.

  3. Despite all the sophisticated security measures, storing data and hosting applications on third-party resources exposes the owners of the information to risks. Cloud resources have been designed for easy access to millions of developers and application service providers which in turn has increased vulnerabilities and security holes.


In the Hybrid edgeCloud architecture, much of the processing is performed at the edge, communication is kept as local as possible, and edge nodes collaborate and share computing and other resources.

The benefits of such an architecture are substantial:

a) reduced cloud hosting costs,

b) reduced communication bandwidth and improved network efficiency,

c) reduced energy consumption and carbon emissions,

d) reduced latency,

e) reduced application development time,

f) support for the microservice trend,

g) increased data privacy,

h) better consumer and enterprise control over data,

i) improved scalability, and

j) minimized transport costs and latencies for applications.


The “central cloud”, meaning servers in data centers, remains a valuable resource, as it may be indispensable for many applications that require central storage or processing. Data center resources need to increase, but at a reasonable pace, to accommodate the needs of central processing only, relegating all other tasks and functions to edge nodes, where most of the data is generated today. Servers in data centers will no longer be the bottleneck or the “always necessary” trust elements; they do not need to grow in proportion to edge nodes, but only in proportion to the needs of central processing, as dictated by use cases and applications rather than by an outdated choice of architecture.

A distributed edge computing architecture means that nodes (connected devices primarily) do not need the central cloud since they can process the data independently and communicate directly, sharing resources and collaborating in a dynamic hierarchy. Yet, in today’s cloud computing ecosystem all nodes are connected to the cloud; most nodes transmit raw data back to the cloud and communicate through the cloud in a fixed hierarchy that does not enable nodes to share resources and collaborate.


Fog computing’s processing efforts are focused at the local area network end of the chain whereas edge computing pushes these efforts closer to the data source. Hence, each device on the network plays its own role in processing the information, instead of using a centralized server for the majority of processing.

  • Real-time or near real-time data analysis, as the data is analyzed at the local device level, not in a distant data center or cloud.

  • Lower operating costs, due to the smaller operational and data management expenses of local devices versus clouds and data centers.

  • Reduced network traffic, because less data is transmitted from local devices via a network to a data center or cloud, thereby reducing network traffic bottlenecks.

  • Improved application performance, as apps that don’t tolerate latency can achieve lower latency levels on the edge, as opposed to a faraway cloud or data center.

Edge computing is ready for deployment now in almost all industries. For example, fitness centres can take advantage of edge computing to enhance their members’ experience. By turning the fitness equipment into edge devices, it is possible to connect the equipment with the user’s mobile device and wearables to monitor the users’ fitness progress and engage with them at the right time.

Self-driving cars are another example where edge computing is indispensable. These cars generate about 1 GB/sec of data, and it is obviously not feasible to send all this data back to the cloud. Edge computing potentially transforms cars into data centers on wheels where most of the processing is performed locally. Cars can communicate peer-to-peer, reducing bandwidth consumption and latency. Imagine two self-driving cars about to crash: they need to make decisions quickly, so the latency of a central cloud system is unacceptable. Self-driving cars need to make agile decisions, and edge computing enables the instantaneous processing of information. Basically, it enables cars to rapidly decide when to brake, speed up, or change direction, and to communicate directly with all the cars nearby.

Moreover, with edge computing you can network all devices inside a car. Passengers can simultaneously connect multiple devices to the car’s infotainment unit, create a cross-device jukebox, and easily and directly share content. It even allows taxi drivers to securely offer an internet connection to passengers.

A third example of how edge computing will disrupt our daily lives is in our homes. Edge computing can turn a set-top box (STB) into a cloud server. The benefits are enormous: better cross-screen media sharing than current AirPlay or Chromecast systems, users can deploy and launch services such as smart home quickly and reliably, and there is potential to group STBs and share resources.

Beyond these use cases, there are many more: connecting all electronic gadgets and appliances directly, allowing device manufacturers to harness the collective power of the devices deployed, enabling new features in social media applications, connecting drones, turning devices such as mobile phones into sensor hubs used in agriculture and mining to collect and process data and even share resources across devices, and so on. Edge computing will disrupt every business across industries, redefining our digital lives.

mimik technology gives companies and end-users the opportunity to turn any computing device into an edge cloud server. mimik’s disruptive technology seamlessly solves the fundamental networking challenge of discovering, connecting, and communicating among computing devices regardless of type, operating system, and location. Thus, it enables edge computing and leverages the collective power of edge devices.


The mimik edgeEngine is a collection of mimik software libraries and corresponding APIs. Developers can use it to efficiently solve the fundamental challenge of networking nodes in the new hyper-connected and highly mobile distributed edge computing world. Delivering this in a heterogeneous environment, regardless of OS, manufacturer, and connected network, is a non-trivial challenge. edgeEngine can run on any mobile device, fixed gateway, autonomous car gateway, connected TV, or even in the cloud, depending on the application use case. Once the edgeEngine is downloaded onto a device, the device becomes a cloud edge node. From here on, we will refer to any device with mimik edgeEngine as a “node”.

mimik edgeEngine resides between the operating system and the end-user application. Several microservices are available from mimik, and the edgeSDK provides the ability for third parties to develop their own microservices. The runtime environment for microservices is also provided by mimik edgeEngine.

By incorporating edgeEngine, computing devices are transformed into intelligent network nodes, able to form clusters.

mimik edgeEngine takes away the complexity of networking among distributed edge cloud nodes, enabling developers to focus on their solution in a microservice model, even on small mobile devices.

The mimik edgeEngine provides native class wrappers (or API wrappers) for all supported platforms, in the interest of shortening the learning curve and accelerating development. These are available for:

  • Android, developed in Java

  • iOS, developed in Objective-C

  • Linux, Windows or Mac OS X, developed in C++

In a nutshell, mimik edgeEngine enables developers to rapidly build exciting new applications by turning computing devices into edge cloud servers. 

edgeEngine provides discovery, connection, and communication between nodes at both the physical and microservice levels, and the benefits are numerous:

  • Automatic device discovery with no need for extra signalling or control.

  • Ability to create a micro-cloud cluster network of edge devices via local Wi-Fi without internet.

  • Node-to-node file sharing and beaming (casting) of content from within any app.

  • Advanced peer-to-peer networking without the hassle of low-level network setup or programming.

  • Microservice runtime environment for many platforms including mobile devices.

mimik is a distributed edge cloud software platform that turns heterogeneous computing devices into capable edge cloud servers. In this sense, mimik has developed and launched the first-ever cross-platform SDK for a heterogeneous edge cloud.

The main difference compared to companies trying to offer similar services is that those alternatives have several limitations. They are either limited to specific operating systems such as Android or iOS, require particular devices as edge servers (for example, only PCs and not smartphones), only offer media sharing, or lack a microservice runtime environment or support; most of them do not have an SDK for developers.

The main difference is that AWS Greengrass nodes do not discover other nodes at the edge. Both Azure IoT and Greengrass need manual configuration. For example, for each node to become part of a Greengrass group, it has to be loaded with the Greengrass IoT client software manually. Since there is no app store or application bundle to download and install easily, it has to be registered manually with the AWS backend, authorized via credentials generated through the AWS console, and saved securely in the IoT device. All this happens before it can connect to its local gateway, which is also a manually configured device. mimik does all of the above automatically.

In fact, Azure IoT and AWS Greengrass are perfect candidates for an IoT microservice hosted on top of a mimik edge node. They allow connecting “IoT islands” to mimik edge clouds (clusters) based on scope, enabling the formation of application- and use-case-specific IoT network overlays on top of physically disjoint networks.

mimik Technology

We do not do network mesh. Instead, our platform enables a service mesh, which allows service-to-service communication.

This is the major differentiation. Other IoT technologies require a local gateway, to which IoT devices connect, and all communication among these devices is proxied through the gateway.

With our technology, every device can be turned into a server, and thus service-to-service communication happens without any gateway.

  1. Can you elaborate?

    mimik allows a microservice to run on a device, and this turns the device into a server.

    The microservice running on the device can also serve personal information, and of course, only permitted entities have access to this information. The microservice can also act as a consumer of another microservice, i.e. service-to-service communication.

No, there is no need for rooting a device.


No routing protocol needs to be modified.

In fact, we provide service discovery over link-local, account, and proximity scopes. These service discoveries give you the address to directly access the service; if the device is behind a firewall, the address points to a secure tunnel.

mimik technology enables running serverless microservices on any compatible OS, and these serverless microservices serve RESTful APIs over HTTPS, which is a TCP-based protocol.

mimik technology can work over IP-based network connections, such as Wi-Fi, Ethernet, and cellular (LTE/5G) connections.


The throughput is determined by the underlying network connection that the device is currently using. In other words, if two devices have a direct link-local network connection, the throughput is effectively the entire local network bandwidth.

If two devices do not have a direct link-local network connection but do have an internet connection, the communication is tunneled via the mimik tunneling service, and the throughput is then limited by both the upload internet speed of device A and the download internet speed of device B.
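In other words, the tunneled throughput is bounded by the slower of the two links. A toy model of this (illustrative numbers only):

```javascript
// Toy model of the tunneled case described above: end-to-end throughput
// is capped by the slower of A's upload speed and B's download speed.
function tunneledThroughputMbps(uploadAMbps, downloadBMbps) {
  return Math.min(uploadAMbps, downloadBMbps);
}
```

For example, with a 10 Mbps upload on device A and a 50 Mbps download on device B, the tunnel tops out at 10 Mbps.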

Because mimik uses serverless microservices, resources are only consumed when API calls happen. In other words, one can host as many serverless microservices as the storage allows.


If devices are on the same link-local network, they can still discover one another via the supernode technology. The elected supernode for the link-local network holds the local IPs of these devices. In other words, if a service on device A wants to communicate with a service on device B, the service on device A goes to the supernode to find the local IP address of the service on device B. It then connects to the service on device B using the local IP address of device B.
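The lookup flow above can be sketched as follows (a simplified model, not the actual mimik API; service names and addresses are made up):

```javascript
// Simplified model: the elected supernode keeps a registry mapping
// link-local services to the local IP addresses of their devices.
class Supernode {
  constructor() {
    this.registry = new Map();
  }
  register(serviceId, localAddress) {
    this.registry.set(serviceId, localAddress);
  }
  lookup(serviceId) {
    return this.registry.get(serviceId) || null;
  }
}

const supernode = new Supernode();
supernode.register('service-b@deviceB', '192.168.1.42:8083');

// A service on device A asks the supernode where device B's service
// lives, then connects to that local address directly (step omitted).
const addressOfB = supernode.lookup('service-b@deviceB');
```
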

Deployment and integration


The mimik hybrid edgeCloud platform back-end is hosted on Amazon Web Services. mimik and Amazon formed a partnership after Amazon performed technical due diligence on mimik’s architecture, scalability, security, and reliability. Additionally, mimik offers a rich suite of application-domain SaaS and edge serverless microservices for healthcare, fintech, gaming, AI, IoT, EdTech, etc. Application developers using the mimik hybrid edge engine can host their own application-domain back-ends on any public or private cloud of their choice.

  • mimik edgeEngine and services such as the FreeRTOS edgeSDK are downloadable software and therefore do not provide low-level hardware access. The edgeEngine provides information on OS, storage, memory, CPU, and connectivity (IP address). The back-end is hosted in AWS, so we do not provide a view of resource availability; cloud providers already do that.

  • We have partners that provide machine vision as a service over GPU, CPU and FPGA.

  • Regarding invoking RESTful APIs on a custom backend: a serverless microservice can be developed and deployed to the edgeEngine in order to make calls to those RESTful APIs.

The back-end is highly scalable (as was validated with Amazon solution architects) and can support many millions of edge nodes, mainly because it was designed from the ground up to support hybrid edge. The edge nodes do not constantly poll the back-end. Everything runs in an ad hoc fashion: nothing is pre-configured, and communication amongst nodes is done via a bootstrap model.

  1. mimik provides Hybrid Edge Cloud computing to enable any computing device outside the data center with an OS, CPU, memory, storage, and connectivity to act as a cloud server, helping app developers unlock the next generation of apps for the hyper-connected world. Such edge devices could include an operator’s CPE (e.g. OpenRAN with Lime Microsystems), a Wi-Fi gateway, smart TV, game console, smartphone, or robot, as well as a sensor. The engine is agnostic to OS, device, network, and public/private cloud. It is non-proprietary and works with existing standard development tools and languages. mimik also provides ready-to-deploy application-domain SaaS (cloud and/or edge) for a wide range of industry verticals. The mimik platform enables an edge-in versus a cloud-out approach to build applications faster, drastically reduce infrastructure cost, minimize latency, improve security and data privacy, and enable direct app-to-app communication via a standard RESTful-API-first, serverless-microservice-driven architecture. mimik’s platform is already used in multiple industry sectors including fintech; mimik has signed up partners such as Amazon Web Services and is part of the 5G Open Innovation Lab.

    • For more technical information on mimik technology, please see the following videos: 

    o   Siavash Alamouti keynote speech at the IEEE conference

    o   microservice edge cloud #1

    o   microservice edge cloud #2

    o   microservice for mobile developers


mimik has made an effort not to introduce anything proprietary that forces application developers to use special tooling: applications are developed with a standard RESTful-API-first microservice architecture. Therefore, there is no need for any specific provider unless it is driven by a business need or an application-specific use case.


The mimik hybrid edge cloud platform has been developed with a security-first approach. As a result, there are six levels of security built into the platform:

Communication level:

edgeEngine-to-edgeEngine authorization and authentication.

Data level:

AES 128-bit encryption with a 256-bit key.

Protocol level:

HTTPS and secure WebSocket (WSS).

API level:

All APIs are secured via OAuth 2.0 tokens for authorization and authentication.

Application level:

OpenID Connect.

Container level:

Every microservice runs within its own container.
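At the API level, securing a call with an OAuth 2.0 token amounts to attaching a bearer token to each request. A hedged sketch (the helper name, endpoint, and token are placeholders, not mimik APIs):

```javascript
// Illustrative helper: merge an OAuth 2.0 bearer token into the options
// of an HTTP request without clobbering any existing headers.
function withBearerToken(accessToken, options = {}) {
  return {
    ...options,
    headers: {
      ...(options.headers || {}),
      Authorization: `Bearer ${accessToken}`,
    },
  };
}

// e.g. fetch('https://edge-node.example/api/v1/items',
//            withBearerToken(accessToken));
```
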

It depends on how the application is published. If it is a consumer application published on the Google or Apple app store, the edgeEngine, together with the application and the serverless microservices developed by the application provider, is bundled (e.g. as an APK) and published on the app store. If it is an enterprise app, it goes through the same enterprise application/solution deployment process that the enterprise is already using. In both cases, after deployment, and based on the application’s use case, logic, conditions, etc., the serverless microservices can also be deployed dynamically.

The light container provided by mimik as part of the edgeEngine is compatible with Docker and has the same API semantics as Docker. This means that an image hosted in Docker (e.g. in the cloud) can be pushed to the mimik edgeEngine and vice versa. Also, an image on one mimik edgeEngine can be pushed to another edgeEngine, all based on application logic. Given that we provide the same API semantics as Docker, DevOps teams can still use tools such as Kubernetes for orchestration, IBM Open Horizon for application management, or whatever other tools they are using.

This is why mimik made the effort to ensure that, from the application developer’s point of view, everything remains the same and no proprietary tools or providers are necessary. mimik has taken care of all the underlying challenges and provides the necessary context via RESTful APIs so that application developers and DevOps can continue to focus on their business.

There are no specific dependencies. Here are the platforms that mimik supports:


  • One application that you can use to see mimik’s underlying hybrid edge cloud platform in action is mimik access, which can be downloaded from 

    o   smart car/city with AirLinq

    o   content sharing with Lime microsystem 

    o    gaming with 3BD

  • mimik currently has customers in wellness, healthcare, fintech, and smart mobility that have either launched their applications on the mimik platform or are in the development phase. mimik is also actively in discussions with large industrial IoT companies for manufacturing automation and mining, the healthcare department of the DoD, and many others. The feedback is that mimik is the only solution they have seen that can meet their full business requirements and billion-dollar revenue growth targets. mimik has a first-mover advantage with a unique value proposition that enables enterprises to unlock their business opportunities.

mimik provides federated identity as well. We can jointly assess the potential partner/customer plus mimik identity service interactions. As for the APIs, please go to SwaggerHub and search for mimik mID; all APIs are published there. If you search for mimik on SwaggerHub, you will see all mimik service APIs and can test them as well.


edgeEngine JavaScript Serverless

mimik edgeEngine provides a Serverless JavaScript programming environment for developers to develop custom microservices. For more information, please read the following page: edgeEngine serverless JavaScript programming API.

If you have read the aforementioned documents and are still uncertain about how to use JavaScript to develop microservices, please let us know. We have open source projects that might also help you understand JavaScript microservice development: edgeEngine open source edge microservices.

Using edgeEngine, you can develop microservices using JavaScript once, and then deploy them across all platforms. We have indicated this in our development documentation.

The foundation of mimik technology is to allow server development and deployment on edge nodes. We turn nodes into cloud servers, which means developers can develop microservices based on serverless architecture, building server-type functionality similar to what is developed on Amazon Lambda using NodeJS. Server developers do not use OS-specific languages or APIs. We made the container layer work with JavaScript to keep the servers (microservices) OS-agnostic on the edge (similar to NodeJS). On the application level, however, you can use any language to call the JavaScript microservice. For example, mimik access uses three JavaScript microservices, while the application itself is developed using platform-native code. This gives developers the flexibility to easily deploy microservices across all platforms, making only the application OS-dependent.

Serverless is an architecture and set of technologies that move on-demand computing to the next level since a request will trigger the deployment of the function that handles the request itself. Serverless is a misnomer since you still need a listening component (a server), but instead of having a complete server waiting for the request, only an API gateway is required and the API gateway will instantiate the function or microservice needed to process the request.

If limited to that approach, serverless is just an evolution of IT architecture. However, by making the deployment of a function or microservice dynamic, serverless architecture also introduces the notion of fluid software since it is possible to decide where and when the function or microservice will be deployed. Therefore, based on conditions (derived from analytics), it will be possible to deploy the function or microservice closer to the request generator, which could be an edge node.
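For illustration, such a placement decision could be as simple as comparing measured conditions against a policy. All names and thresholds below are hypothetical, not part of any actual platform:

```javascript
// Toy placement policy: deploy the microservice on the closest edge node
// whose measured latency is acceptable, otherwise fall back to the cloud.
function choosePlacement(candidates, maxEdgeLatencyMs = 50) {
  const edges = candidates
    .filter((c) => c.type === 'edge' && c.latencyMs <= maxEdgeLatencyMs)
    .sort((a, b) => a.latencyMs - b.latencyMs);
  if (edges.length > 0) return edges[0];
  return candidates.find((c) => c.type === 'cloud') || null;
}
```

In a real system the "conditions" would be derived from analytics, as described above, rather than a single latency number.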

In this case, serverless architecture is a fundamental transformation since it breaks away from client-server architecture. The shift from legacy architecture will include the following considerations:

  • Solutions have to be microservice based.
  • There may not be a central component, or the central component may be limited to a discovery service.
  • A microservice may run on the same device the application making the request is running.
  • Microservices are inherently single-tenant and potentially single-user.


It is important to understand extreme decomposition, since serverless implies microservices, which in turn means understanding clusters and cluster management. Because of the fluidity of the solution, it is also important to understand extreme distribution, including edge cloud, which modifies the criteria and scope of cloud-based cluster management (for example, clustering based on proximity or on a user’s account). Technologies like Kubernetes for cluster management, and sidecar patterns like Istio or mimik edgeEngine, are therefore important to understand. Automated deployment matters as well, since non-human-driven deployment and SCM will be mandatory for the success of a serverless/microservice architecture.

The security protocols do not change. However, since serverless-microservice-based solutions are distributed, it is important not to depend on a central trust authority and to use peer-to-peer token validation for API requests. One should also not assume that the system’s components will be behind a firewall; the network should be treated as untrustworthy. Finally, it is important to handle multiple levels of security, since sensitive payloads may pass through relay microservices. For example, user information may go through a tunnel microservice: the call to the tunnel is protected by a token, but it is also necessary to protect the user information itself so that the tunnel cannot interpret it.

In serverless-microservice-based architectures, each instance has to be stateless. Therefore the storage components are essential in storing states, as opposed to some legacy systems where the states are maintained by non-storage components. Based on the distributed nature of serverless-microservice-based systems, and due to theoretical limitations (CAP Theorem), the storage will most likely be BASE as opposed to ACID legacy storage. Clever partitioning strategies like explicit addressing and geocentric storage have to be put in place in order to cope with the eventual consistency of the system.

In serverless-microservice systems, the computing demand is mediated by the application itself, resulting in a closer fit between the allocated computing power and the used computing power. Due to the dynamic and fluid nature of the systems, it is also possible to offload the required computing power to other computing nodes (like edge devices or gateways) and thereby further optimize the allocation of cloud-like computing power.

edgeEngine Device Connectivity

Is there a need for at least one device to be connected to the internet to create the mesh network? Or will it work even if no devices are connected to the internet?

  1. The user installs the app using the platform-appropriate app store (e.g. Google Play Store, iOS App Store).
  2. The user registers with the app (meaning that edgeEngine will register the node ID under a specific user’s account ID).
  3. edgeEngine receives a valid token from our back-end services. The token has an expiration time that depends on the scope of service, which can vary from 24 hours to several days or months.

From this point on, edgeEngine doesn’t need the internet to be available:

  1. edgeEngine uses the valid token to provide all functionality.
  2. Devices on the same Wi-Fi network can discover each other using edgeEngine.
  3. mimik edgeEngine container manager can instantiate any number of required microservices and use edgeEngine services.
  4. Microservices can communicate among each other, exchanging data.
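The offline behavior above relies on the cached token remaining valid. A minimal local validity check, with hypothetical field names (not the actual edgeEngine token format), might look like:

```javascript
// Hypothetical sketch: keep using the cached token obtained during
// registration until its expiration time passes.
function isTokenStillValid(token, nowMs = Date.now()) {
  return Boolean(token)
    && typeof token.value === 'string'
    && nowMs < token.expiresAtMs;
}
```
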

Our product is deployed on AWS with a multi-region configuration, and it uses AWS load balancer and auto-scaling features. Other than that, all edgeEngine components are NodeJS and deployed using Ansible, which lets us minimize the effort required to deploy on AWS. All deployments are done via Ansible, which can also be used for on-premise deployment with some modifications to the Ansible scripts.

We are not using UDP or TCP hole punching as the primary P2P communication mechanism due to inconsistency in NAT traversal.

We use UDP multicast for local supernode discovery. For bootstrap registration and other communication, we use HTTPS; for tunneling to BEPs we use Secure WebSocket (WSS) for inbound communication (BEP TO NODE) and HTTPS for outbound communication (NODE TO BEP). In the future, we may consider UDP/TCP hole punching as a secondary mechanism.

edgeEngine Security

edgeEngine provides three levels of security:

  1. Communication encryption (at edgeEngine level communication)
    When a node communicates with a supernode, the entire exchange is encrypted using the AES 128 GCM encryption algorithm.
  2. Payload encryption (at edgeEngine level communication)
    In the account cluster use case, the payload is encrypted using the AES 128 GCM encryption algorithm.
  3. Edge Access Token Authorization
    Registered apps must use an edge access token to make API calls to edgeEngine.

Please note: any other level of security beyond the aforementioned levels needs to be managed by the app developers.

For Example:

  • App to edge microservice communication security.
  • Edge microservice to edge microservice (link-local) communication security.

It can’t be used for a number of reasons, including:

  • HTTPS requires a signed certificate.
  • A signed certificate requires a valid and registered domain name.
  • Saving the certificate’s private key on every single link-local node in a secure way is nearly impossible.


You can encrypt application payload by using any available off-the-shelf security algorithm (e.g. AES 128 GCM).

edgeEngine Network Configuration

Yes, you can, but we highly recommend that you don’t. The 20-second timeout has been deliberately designed this way as part of our edge-container quota management policy. This policy prevents a microservice from monopolizing the edge node’s entire CPU time.

Yes, but keep in mind that disabling the TCP delay risks causing network congestion.

Additional Questions?

The evolution of serverless architecture will make the discovery service a key part of the system, since one of the main issues will not be whether a service is running but rather where the service is or will be.

Another issue is the maintainability and optimization of the system, since when a service is down or non-existent, it could mean that:

  • The service could not start.
  • The service went down because of a bug.
  • The script for deploying the service is faulty.
  • The data that is used to trigger the deployment of the service is wrong.
  • The inference engine that makes the decision to trigger the deployment is not trained properly.
  • It is ok for the service not to run.

Maintaining and debugging serverless-microservice based systems will have to be based on logs (it is impossible to put a breakpoint in a service that actually is not deployed yet) and deep analysis of these logs to identify anomalous patterns. Finally, optimization will be the key.

Need more help?

Feel free to contact us on Stack Overflow or send us a question via our support page if you have any questions.

Your feedback is important to us.