Simplifying Machine Learning with AWS SageMaker

Posted by Mohit Bansal | 18th March 2020

 

With the advent of artificial intelligence (AI), global technology players such as Amazon, IBM, and Google are enabling businesses to build dynamic cloud-based machine learning solutions. In this article, we take a closer look at how businesses can build, train, and deploy machine learning models with AWS SageMaker, using a guide shared by the AI team at Oodles.

 

About SageMaker

 

SageMaker is a fully managed AWS service for building, training, and deploying machine learning models quickly by simplifying the most laborious and time-consuming steps of a machine learning workflow. The platform is part of Amazon's broader suite of artificial intelligence services. At Oodles AI, we have hands-on experience in deploying Amazon cloud products to offer AWS consulting services and build enterprise-grade machine learning applications.

 

Machine Learning Steps

- Build

- Train 

- Deploy 

 

Using SageMaker's high-level Python SDK, each of these steps can be completed with a minimal amount of code.
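
The snippets below assume that a SageMaker session, an IAM execution role, and an S3 bucket have already been set up. A minimal setup sketch (the bucket name is just a placeholder) could look like this:

import boto3
import sagemaker
from sagemaker import get_execution_role

# SageMaker session used to interact with the service and S3
sess = sagemaker.Session()

# IAM role that grants SageMaker access to S3 and other AWS resources
role = get_execution_role()

# Placeholder S3 bucket for training data and model artifacts
bucket = 'your-sagemaker-bucket'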

 

Build

Before training, we need to initialize and create a model. Setting up a model from scratch is normally tedious, but SageMaker simplifies it by providing built-in implementations of commonly used algorithms. For example, creating an XGBoost model takes only a few lines with the SageMaker SDK.

 

from sagemaker.amazon.amazon_estimator import get_image_uri

# Container image URI for the built-in XGBoost algorithm in the current region
container = get_image_uri(boto3.Session().region_name, 'xgboost', '0.90-1')

# Estimator wrapping the container, execution role, instance configuration, and output location
xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1,
                                    train_instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/models'.format(bucket),
                                    sagemaker_session=sess)

 

In the above code snippet, we first retrieve the container image for our algorithm, i.e. XGBoost.

Then we build our model using that container and a few other parameters:

- train_instance_type: specifies the hardware of the training machine (memory, storage, compute speed, and GPU usage).

- output_path: specifies the S3 location where the built model will be stored.

 

Train

Once we have built our SageMaker model, we have to train it on our training data.

 

# Training input channel pointing to the CSV training data in S3
s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/train_file.csv'.format(bucket,folder), content_type='csv')

With the above snippet, we create a training input simply by providing the S3 path of the data. This training input is then used to fit the model:

xgb.fit({'train': s3_input_train})

This trains the model and saves the resulting model artifact to S3.
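
After training completes, the location of the saved artifact can be read back from the estimator. As a small sketch, this S3 path can be passed as the model_uri used in the Deploy step below:

# S3 path of the trained model artifact (model.tar.gz) under the output_path set earlier
model_uri = xgb.model_data
print(model_uri)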

 

Deploy

Deploying a SageMaker model is as easy as building and training it. In the following snippet, the high-level SageMaker SDK is used to deploy a model. We provide the S3 path of the saved model as model_uri, set the model (and hence endpoint) name, and choose the instance type and count.

 

# Create a model object from the trained artifact in S3 and the inference container image
# (amazon_sagemaker_execution_role is the IAM execution role, as used during training)
model = sagemaker.model.Model(model_uri,
                              image=get_image_uri(sess.boto_region_name, 'xgboost-neo', repo_version='latest'),
                              role=amazon_sagemaker_execution_role)

# Name the model; this also determines the endpoint name
model.name = 'deployed-xgboost-wellsite'

# Deploy the model to a real-time hosted endpoint
model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

 

This starts our endpoint. The endpoint can now take input data and return predictions.
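
As a rough example of getting predictions, a CSV-formatted record can be sent to the endpoint through the SageMaker runtime client in boto3. The endpoint name below assumes it matches the model name set above, and the feature values are placeholders:

import boto3

runtime = boto3.client('sagemaker-runtime')

# Send one CSV record (placeholder feature values) to the endpoint and read the prediction
response = runtime.invoke_endpoint(EndpointName='deployed-xgboost-wellsite',
                                   ContentType='text/csv',
                                   Body='0.5,1.2,3.4')

print(response['Body'].read().decode('utf-8'))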

 

Resources:

For further details, visit https://aws.amazon.com/sagemaker

SageMaker high-level Python SDK documentation: https://sagemaker.readthedocs.io/en/stable/

 


About Author

Mohit Bansal

He is a tech enthusiast who is always ready to learn new things and has strong skills in Python.
