It takes an ecosystem to park a car.
By Maen Suleiman and Gorka Garcia
Applications at the edge of the network require special technologies, such as efficient packet processing, machine learning and connectivity to the cloud.
The edge spans a multitude of markets, including small business, industrial and enterprise, and it covers everything from the devices you would expect to find there to less obvious use cases such as an automated parking lot.
At the heart of the parking lot is a community board powered by an ARMADA processor and running AWS Greengrass software, which serves as an edge compute node. That node receives video streams from two cameras placed at the entry and exit gates of the parking lot. It runs two Lambda functions that process the incoming video streams, identify vehicles entering the garage by their license plates and check whether those vehicles are authorized to enter.
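A heavily simplified sketch of the first of these functions, the recognition Lambda, might look like the following. It assumes the OpenALPR Python bindings and the AWS IoT Greengrass Core SDK are installed on the board; the topic name, gate ID and frame path are illustrative and not part of the original demo.

```python
# Sketch of the recognition Lambda on the Greengrass core.
# Topic name, gate ID and frame path are placeholders.
import json

import greengrasssdk
from openalpr import Alpr

GATE_ID = "Entry"            # would be "Exit" for the exit-gate camera
PLATE_TOPIC = "parking/plates"  # hypothetical IoT topic name

iot = greengrasssdk.client("iot-data")
alpr = Alpr("us", "/etc/openalpr/openalpr.conf",
            "/usr/share/openalpr/runtime_data")


def process_frame(frame_path):
    """Run license plate recognition on one captured frame and publish
    the best candidate, together with the gate ID, toward the cloud."""
    results = alpr.recognize_file(frame_path)
    if not results["results"]:
        return
    plate = results["results"][0]["plate"]
    iot.publish(
        topic=PLATE_TOPIC,
        payload=json.dumps({"gate": GATE_ID, "plate": plate}),
    )


def function_handler(event, context):
    # A long-lived Greengrass Lambda would normally keep its own capture
    # loop; here we assume a helper has written the latest frame to disk.
    process_frame("/tmp/latest_frame.jpg")
```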
The first Lambda function runs Automatic License Plate Recognition (OpenALPR) software to extract the license plate number, then delivers it, together with the gate ID (entry/exit), to a Lambda function running in the AWS cloud. That cloud Lambda function reads a DynamoDB whitelist database and determines whether the license plate belongs to an authorized car. The result is sent back to a second Lambda function at the edge of the network, on the Marvell MACCHIATObin board, which is responsible for managing the parking lot capacity and opening or closing the gate.

This second Lambda function also logs the activity at the edge to the Amazon Elasticsearch Service, which serves as the backend for Kibana, an open-source data visualization engine. Kibana gives a remote operator direct access to information on parking lot occupancy, entry gate status and exit gate status, while the AWS Cognito service authenticates users for access to Kibana.
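On the cloud side, the whitelist lookup described above might look roughly like this sketch. It assumes the plate/gate message arrives as the Lambda event and that the verdict is published back to the edge over an AWS IoT topic; the table name, key schema and topic name are illustrative.

```python
# Sketch of the cloud-side whitelist check. Table, key and topic
# names are illustrative; the real deployment may differ.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
iot = boto3.client("iot-data")

WHITELIST_TABLE = "ParkingWhitelist"  # hypothetical DynamoDB table
VERDICT_TOPIC = "parking/verdicts"    # hypothetical IoT topic


def lambda_handler(event, context):
    """Triggered by the plate/gate message from the edge; looks the plate
    up in the whitelist and publishes the verdict back to the edge."""
    plate = event["plate"]
    gate = event["gate"]

    table = dynamodb.Table(WHITELIST_TABLE)
    item = table.get_item(Key={"plate": plate}).get("Item")
    allowed = item is not None

    iot.publish(
        topic=VERDICT_TOPIC,
        qos=1,
        payload=json.dumps({"plate": plate, "gate": gate,
                            "allowed": allowed}),
    )
    return {"allowed": allowed}
```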
After the AWS cloud Lambda function sends the verdict (allowed/denied) to the second Lambda function running on the MACCHIATObin board, that function communicates with the gate controller, which then opens or closes the gate as required.
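The second edge Lambda could be outlined as follows. The gate controller interface, Elasticsearch endpoint and capacity limit are placeholders, since the demo's wiring to the gate hardware and to the Elasticsearch cluster is not described here; treat this purely as a sketch of the logic.

```python
# Sketch of the second edge Lambda: receive the verdict, update the
# occupancy count, drive the gate controller and log to Elasticsearch.
import json

import requests  # assumed to be bundled with the Lambda package

ES_ENDPOINT = "https://search-parking-demo.example.com/parking/log"  # placeholder
MAX_SPOTS = 50                                                       # illustrative

occupancy = 0  # kept in memory by the long-lived Greengrass Lambda


def open_gate(gate):
    """Placeholder for the call to the physical gate controller."""
    pass


def function_handler(event, context):
    global occupancy
    allowed = event["allowed"]
    gate = event["gate"]

    if allowed and gate == "Entry" and occupancy < MAX_SPOTS:
        open_gate(gate)
        occupancy += 1
    elif allowed and gate == "Exit":
        open_gate(gate)
        occupancy = max(0, occupancy - 1)

    # Log the decision so Kibana can visualize occupancy and gate status.
    requests.post(ES_ENDPOINT, json={
        "plate": event["plate"],
        "gate": gate,
        "allowed": allowed,
        "occupancy": occupancy,
    })
```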
This scenario showcases the ability to run a machine learning algorithm with AWS Lambda at the edge and make the identification process extremely fast. It is made possible by Marvell's high-performance, low-power multi-core processors, whose capabilities extend to a range of higher-end networking and security applications that can benefit from the maturity of the Arm ecosystem and the ability to run machine learning in a multi-core environment at the edge of the network.
Information on how to enable AWS Greengrass on Marvell community boards is available here and here.
Gorka Garcia is a senior lead engineer at Marvell.