Brainy Pi


AWS SageMaker Neo is an Amazon Web Services (AWS) service that optimizes machine learning models for edge devices. It compiles, quantizes, and optimizes models for efficient execution on specific hardware, reducing both latency and power consumption. Developers can work in popular deep learning frameworks such as TensorFlow and PyTorch, both supported by SageMaker Neo, and then deploy the optimized models on edge devices such as BrainyPi, bringing real-time machine learning to the edge. Let's explore AWS SageMaker Neo on Brainy Pi!
BrainyPi, a single-board computer designed for industrial AI and machine learning applications, is a natural platform for machine learning on the edge. In this blog post, we will walk through compiling a model with AWS SageMaker Neo and performing inference on BrainyPi, so you can deploy optimized models on your edge devices and explore real-time AI applications.

Prerequisites:

Before we dive into the details, make sure you have the following prerequisites in place:
  1. An AWS SageMaker trained machine learning model: You should have a model trained using AWS SageMaker, saved in a format compatible with BrainyPi, such as Keras, TensorFlow, TFLite, PyTorch, or ONNX/MXNet. See the links below:
    1. Training model in AWS SageMaker
    2. Preparing model for Compilation
    For example purposes in this blog, we will be using the pre-trained object detection model coco_ssd_mobilenet_v1_1.0_quant_2018_06_29, trained on the COCO dataset.
  2. A BrainyPi board: You should have a BrainyPi board with Rbian OS installed. Refer to the BrainyPi documentation for instructions on setting up your board.
  3. AWS account: You should have an AWS account.

Setup AWS Credentials

  1. Create an Administrator User for the tutorial
    1. Go to AWS console, Search for IAM.

      Screenshot_20230411_220018

    2. Go to “Users” and click on “Add user”

      Screenshot_20230411_220027

    3. In “Set Permissions”, Choose option “Attach policies directly”

      Screenshot_20230411_220128

    4. Search for and select the “AdministratorAccess” policy

      Screenshot_20230411_220132

    5. Create user

      Screenshot_20230411_220139

  2. Generate Access Keys for the user
    1. Click on the User that you created.

      Screenshot_20230410_161538

    2. Select “Security credentials” Tab

      Screenshot_20230410_161543

    3. Scroll down and Click on “Create access key”

      Screenshot_20230410_161549

    4. Choose the option “Other”.

      Screenshot_20230410_161639

    5. Click on “Create access key”

      Screenshot_20230410_161649

    6. Download the csv file

      Screenshot_20230410_161657

  3. The CSV file contains two credentials:
    1. Access Key ID
    2. Secret Access Key
  4. Open up a terminal on BrainyPi and run the following commands:
    mkdir -p ~/.aws/
    
    cat << EOF > ~/.aws/credentials
    [default]
    aws_access_key_id = YOUR_ACCESS_KEY
    aws_secret_access_key = YOUR_SECRET_KEY
    EOF
    This will set up AWS credentials on your Brainy Pi. Replace YOUR_ACCESS_KEY and YOUR_SECRET_KEY with the values from the CSV file.
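To sanity-check that the credentials file was written correctly, you can parse it with Python's standard configparser. This is a minimal, stdlib-only check and does not talk to AWS; the helper name is just for illustration:

```python
import configparser

def check_aws_credentials(path):
    """Return True if `path` contains a [default] profile with both keys."""
    config = configparser.ConfigParser()
    config.read(path)  # silently yields an empty config if the file is missing
    return ("default" in config
            and "aws_access_key_id" in config["default"]
            and "aws_secret_access_key" in config["default"])

if __name__ == "__main__":
    import os
    ok = check_aws_credentials(os.path.expanduser("~/.aws/credentials"))
    print("credentials file OK" if ok else "credentials file missing or incomplete")
```

If this reports a problem, re-run the commands above and double-check the values pasted from the CSV file.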

Compile model for BrainyPi using AWS SageMaker

As an example, we will compile the coco_ssd_mobilenet_v1_1.0_quant_2018_06_29 object detection model (a TFLite model) for BrainyPi.
  1. Install dependencies
    sudo apt-get install python3-pip python3-matplotlib python3-scipy python3-opencv
    pip install boto3 numpy
    sudo pip3 install jupyter
  2. For ease of use, the steps for compiling the model for BrainyPi have been converted to a Jupyter notebook. Clone the repository containing the notebook:
    git clone https://github.com/brainypi/brainypi-aws-sagemaker-example.git
  3. Then open Jupyter Notebook by running:
    jupyter-notebook
  4. The notebook will open in the browser. Navigate to the brainypi-aws-sagemaker-example folder and open the aws-sagemaker-inference.ipynb file.

    2023-04-11-165057_1280x720_scrot

  5. Change the AWS_REGION variable in the first cell.
    Note: If you have previously run the notebook, you may also have to change these variables:
    1. role_name in cell 1
    2. bucket in cell 3
  6. In the notebook, click on “Cell” and choose “Run All”

    2023-04-11-165150_1280x720_scrot

  7. This will compile the model for BrainyPi and run a sample inference program to check that the compilation succeeded.

    2023-04-11-121847_1280x720_scrot

  8. The compiled model, named compiled-detect.tar.gz, will be downloaded to the brainypi-aws-sagemaker-example folder.
  9. This compiled model can be copied to other BrainyPi devices and re-used without connecting to AWS or consuming AWS resources.
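Under the hood, the notebook asks SageMaker Neo to compile the model through the create_compilation_job API. A minimal sketch of that request is below; the role ARN, S3 URIs, input tensor name/shape, and target device are illustrative placeholders, not values confirmed for BrainyPi:

```python
def build_compilation_request(job_name, role_arn, s3_model_uri, s3_output_uri):
    """Assemble the arguments for SageMaker's create_compilation_job API.

    The input tensor name/shape and the target device below are
    illustrative placeholders; adjust them for your model and board.
    """
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": s3_model_uri,
            # Input name/shape assumed for the quantized SSD MobileNet model.
            "DataInputConfig": '{"normalized_input_image_tensor": [1, 300, 300, 3]}',
            "Framework": "TFLITE",
        },
        "OutputConfig": {
            "S3OutputLocation": s3_output_uri,
            # Placeholder ARM target; choose the device matching your board.
            "TargetDevice": "rasp4b",
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }

# To submit the job (requires AWS credentials and boto3):
#   import boto3
#   boto3.client("sagemaker").create_compilation_job(
#       **build_compilation_request(job_name, role_arn, s3_model_uri, s3_output_uri))
```

The notebook handles uploading the model to S3 and polling the job status for you; this sketch only shows the shape of the compilation request.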

Run inference with the compiled model without connecting to AWS

To run inference, we need inference code for the model written with the DLR library. As an example, we will use the ready-to-use inference code for the coco_ssd_mobilenet_v1_1.0_quant_2018_06_29 object detection model written for BrainyPi.
  1. Navigate to the brainypi-aws-sagemaker-example git repository.
  2. Check if the folder dlr_model exists. If it does not:
    1. Copy the compiled model file compiled-detect.tar.gz into the repository folder
    2. Extract the model
      mkdir ./dlr_model
      tar -xzvf ./compiled-detect.tar.gz --directory ./dlr_model
  3. Now, run the commands:
    curl https://farm9.staticflickr.com/8463/8132484846_8ce4da18ba_z.jpg --output input_image.jpg
    python3 object-detection-example.py
    This will run coco_ssd_mobilenet_v1_1.0_quant_2018_06_29 object detection on the input image input_image.jpg and show the output.

    2023-04-11-121847_1280x720_scrot

  4. See the full code here: https://github.com/brainypi/brainypi-aws-sagemaker-example/blob/main/object-detection-example.py
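The example script follows the standard DLR pattern: load the Neo-compiled artifact with DLRModel and feed it a uint8 image batch. A minimal sketch, assuming the 300x300x3 input shape of this particular model (adjust for other models):

```python
import numpy as np

def to_model_input(image):
    """Turn a 300x300 RGB uint8 image into the [1, 300, 300, 3] batch
    that the quantized SSD MobileNet model expects (shape assumed)."""
    assert image.shape == (300, 300, 3) and image.dtype == np.uint8
    return np.expand_dims(image, axis=0)

if __name__ == "__main__":
    import cv2
    from dlr import DLRModel

    # Load the test image downloaded earlier and resize it for the model.
    img = cv2.cvtColor(cv2.imread("input_image.jpg"), cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (300, 300)).astype(np.uint8)

    # Load the Neo-compiled artifact from ./dlr_model and run inference.
    model = DLRModel("./dlr_model", dev_type="cpu")
    outputs = model.run(to_model_input(img))
    print([o.shape for o in outputs])
```

Because DLR loads the compiled artifact from local disk, this runs entirely on the board with no AWS connection.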

Conclusion

In conclusion, AWS SageMaker Neo is a powerful and cost-effective way to optimize and deploy machine learning models on edge devices like BrainyPi. Developers incur only the one-time cost of model training and compilation; thereafter, they can perform inference on the edge without internet connectivity, without connecting to or sending data to AWS, and without paying per-inference charges. By leveraging SageMaker Neo, developers can unlock the full potential of edge computing and build intelligent, responsive applications in an efficient, independent edge environment.