{"id":404,"date":"2024-01-31T10:04:56","date_gmt":"2024-01-31T09:04:56","guid":{"rendered":"https:\/\/sii.ua\/blog\/?p=404"},"modified":"2024-02-16T18:33:04","modified_gmt":"2024-02-16T17:33:04","slug":"deploying-custom-models-on-aws-sagemaker-using-fastapi","status":"publish","type":"post","link":"https:\/\/sii.ua\/blog\/en\/deploying-custom-models-on-aws-sagemaker-using-fastapi\/","title":{"rendered":"Deploying custom models on AWS Sagemaker using FastAPI"},"content":{"rendered":"\n<p id=\"50b4\">Although AWS SageMaker is a great platform that organizes and greatly simplifies all data science-related activities, the first experience of deploying custom machine learning models can be cumbersome. Fortunately, we\u2019ve been down this road many times, and we\u2019ll guide you through your first deployment, avoiding all the potential pitfalls.<\/p>\n\n\n\n<p id=\"ee29\">AWS SageMaker is a powerful tool for developing and deploying machine learning models. It makes it possible to&nbsp;<strong>scale models,&nbsp;<\/strong>develop them with a&nbsp;<strong>variety of frameworks,<\/strong>&nbsp;and&nbsp;<strong>integrate them with other AWS services<em>&nbsp;<\/em><\/strong>such as model monitoring<strong>.&nbsp;<\/strong>However, its versatility and scalability come at a price \u2014 it&nbsp;<strong>requires extensive knowledge<\/strong>&nbsp;of the platform making it hard for beginners to get started. Things get even more complex when you try to deploy custom models, especially those developed outside the platform.<\/p>\n\n\n\n<p id=\"b8f7\">The article is here to help you with the above. 
<strong>We are going to show how you can easily deploy custom ML models using Docker, FastAPI, and AWS Sagemaker in 9 simple steps:<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Train the model wherever it suits you.<\/li>\n\n\n\n<li>Write the inference code that exposes the <em>ping<\/em> and <em>invocations<\/em> endpoints with FastAPI.<\/li>\n\n\n\n<li>Dockerize the inference code.<\/li>\n\n\n\n<li>Push the Docker image to Amazon Elastic Container Registry (ECR).<\/li>\n\n\n\n<li>Save the model artifacts on an S3 bucket.<\/li>\n\n\n\n<li>Create an Amazon SageMaker model that refers to the Docker image in ECR.<\/li>\n\n\n\n<li>Create an Amazon SageMaker endpoint configuration that specifies the model and the resources to be used for inference, using the AWS SDK (boto3).<\/li>\n\n\n\n<li>Deploy your model by creating an endpoint.<\/li>\n\n\n\n<li>Use Boto3 to test your inference.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Train the model locally or in a cloud-based environment<\/strong><\/h2>\n\n\n\n<p>For the sake of simplicity, we are going to use a pre-trained HuggingFace model: a fine-tuned BERT model that is ready to use for Named Entity Recognition. It recognizes four types of entities: Locations (LOC), Organisations (ORG), Persons (PER), and Miscellaneous (MISC).
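<\/p>\n\n\n\n<p>As a minimal sketch, such a pipeline could be downloaded and saved into the <strong>ml\/model<\/strong> directory that the inference code will later read from. Note that the checkpoint name <em>dslim\/bert-base-NER<\/em> is our assumption for this example; the article does not pin a specific model, so substitute your own.<\/p>

```python
# Sketch: fetch a pre-trained NER pipeline and save it under ml/model/.
# The checkpoint "dslim/bert-base-NER" is an assumed example; the article
# does not pin a specific model, so substitute your own checkpoint here.
from transformers import pipeline

classifier = pipeline("ner", model="dslim/bert-base-NER")
classifier.save_pretrained("ml/model")
```

<p>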
In reality, this could be any model you\u2019d like to deploy.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Dockerize the inference code<\/strong><\/h2>\n\n\n\n<p>First of all, you need to create a specific directory and file structure (see below) within your working directory.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><a href=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/1-10-1.png\"><img decoding=\"async\" width=\"664\" height=\"284\" src=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/1-10-1.png\" alt=\"Structure of Docker image\" class=\"wp-image-405\" srcset=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/1-10-1.png 664w, https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/1-10-1-300x128.png 300w\" sizes=\"(max-width: 664px) 100vw, 664px\" \/><\/a><figcaption class=\"wp-element-caption\">Fig. 1 Structure of Docker image<\/figcaption><\/figure>\n\n\n\n<p>Directory&nbsp;<strong>opt\/&nbsp;<\/strong>will be your root. The&nbsp;<em>main.py<\/em>&nbsp;file should contain the inference code that reads the model from a specific path (<strong>ml\/model<\/strong>) and exposes a REST API for performing predictions. We are using FastAPI, a high-performance web framework for developing APIs with Python. 
Let\u2019s take a look at the script, especially the&nbsp;<em>invocations&nbsp;<\/em>and&nbsp;<em>ping<\/em>&nbsp;entrypoints that are required by AWS SageMaker.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# main.py\nimport logging\n\nfrom fastapi import FastAPI, Request\nfrom transformers import pipeline\n\nMODELS_PATH = &quot;ml\/model\/&quot;\n\napp = FastAPI()\nclassifier = None\n\n\n@app.get(&#039;\/ping&#039;)\nasync def ping():\n    return {&quot;message&quot;: &quot;ok&quot;}\n\n\n@app.on_event(&#039;startup&#039;)\ndef load_model():\n    global classifier\n    classifier = pipeline(&quot;ner&quot;, model=MODELS_PATH)\n    logging.info(&quot;Model loaded.&quot;)\n\n\n@app.post(&#039;\/invocations&#039;)\nasync def invocations(request: Request):\n    json_payload = await request.json()\n\n    inputs = &#x5B;record&#x5B;&quot;scope&quot;] for record in json_payload]\n    output = &#x5B;{&quot;prediction&quot;: classifier(text)} for text in inputs]\n    return output\n<\/pre><\/div>\n\n\n<ul class=\"wp-block-list\">\n<li><em>ping()&nbsp;<\/em>is used by AWS SageMaker to verify that your service is up and your model works.<\/li>\n\n\n\n<li><em>load_model()&nbsp;<\/em>is an event handler that runs on service \u2018startup\u2019, which means it is executed exactly once, when the application starts up. It\u2019s a great place to load the model into memory, so that is what we do: the loaded pipeline is stored in a module-level variable that <em>invocations()<\/em> can use.<\/li>\n\n\n\n<li><em>invocations()<\/em>&nbsp;is a REST POST entrypoint, and this is where your inference code goes: everything required to perform predictions with your model. It is declared <em>async<\/em> and uses the await keyword to wait for other asynchronous functions to complete before continuing with their execution. 
In this case, <a aria-label=\"when an asynchronous function is called, it doesn\u2019t block the execution until it returns a result (opens in a new tab)\" href=\"https:\/\/fastapi.tiangolo.com\/async\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"ek-link\" >when an asynchronous function is called, it doesn\u2019t block the execution until it returns a result<\/a>.<\/li>\n<\/ul>\n\n\n\n<p>Next comes the Dockerfile. It executes several command-line instructions step by step and builds the image from the indicated base image and requirements.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# Dockerfile\nFROM python:3.8-slim\n\n# copy the code and requirements\nCOPY main.py \/opt\/\nCOPY requirements.txt \/opt\/\nCOPY run_server.sh \/opt\/serve\n\nWORKDIR \/opt\n\n# install the required python packages\nRUN pip install -r requirements.txt\n\n# make the serve script executable and add it to the executable path\nRUN chmod +x \/opt\/serve\nENV PATH=&quot;\/opt\/:${PATH}&quot;\n<\/pre><\/div>\n\n\n<p>The requirements.txt file lists the packages to be installed while building the Docker image. Note that transformers also needs a backend such as PyTorch at runtime, which is why torch is included.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# requirements.txt\nfastapi==0.89.1\nuvicorn==0.20.0\ntransformers==4.25.1\ntorch\n<\/pre><\/div>\n\n\n<p>Finally, there is&nbsp;<em>run_server.sh<\/em>. It starts the machine learning service hosting your model and inference code. The host parameter specifies the IP address the server will listen on, and the port parameter specifies the port number. 
Don\u2019t modify them, as these values are required by SageMaker.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n#!\/bin\/bash\n# run_server.sh\nuvicorn main:app --proxy-headers --host 0.0.0.0 --port 8080\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\">Push the Docker image to Amazon Elastic Container Registry (ECR)<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>First, create a private repository in Amazon Elastic Container Registry where your image will be stored. <a href=\"https:\/\/docs.aws.amazon.com\/AmazonECR\/latest\/userguide\/repository-create.html\" target=\"_blank\" aria-label=\"Please follow the guideline to do so (opens in a new tab)\" rel=\"noreferrer noopener nofollow\" class=\"ek-link\" >Please follow the guideline to do so<\/a>.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Make sure that you have the latest version of the AWS CLI and Docker installed.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><a href=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/2-7-1.png\"><img decoding=\"async\" width=\"875\" height=\"278\" src=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/2-7-1.png\" alt=\"Amazon ECR - your repository name\" class=\"wp-image-407\" srcset=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/2-7-1.png 875w, https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/2-7-1-300x95.png 300w, https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/2-7-1-768x244.png 768w\" sizes=\"(max-width: 875px) 100vw, 875px\" \/><\/a><figcaption class=\"wp-element-caption\">Fig. 2 Amazon ECR &#8211; your repository name<\/figcaption><\/figure>\n\n\n\n<p>Go to the created repository and click on the&nbsp;<strong><em>View push commands<\/em>&nbsp;<\/strong>button to view the steps to push an image to your new repository. 
Make sure you are in the location of the&nbsp;<code>Dockerfile<\/code>&nbsp;and run the commands one by one.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\naws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com\ndocker build -t your_repository_name .\ndocker tag your_repository_name:latest XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com\/your_repository_name:latest\ndocker push XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com\/your_repository_name:latest\n<\/pre><\/div>\n\n\n<p>If all goes well, you should be able to see your ECR image on AWS.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Save model artifacts on S3 bucket<\/strong><\/h2>\n\n\n\n<p>Models must be packaged as compressed tar files (<code>*.tar.gz<\/code>) and saved on the S3 bucket. Let\u2019s say your model\u2019s directory looks like this:<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><a href=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/3-6-768x508-2.png\"><img decoding=\"async\" width=\"768\" height=\"508\" src=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/3-6-768x508-2.png\" alt=\"Model's directory\" class=\"wp-image-409\" srcset=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/3-6-768x508-2.png 768w, https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/3-6-768x508-2-300x198.png 300w\" sizes=\"(max-width: 768px) 100vw, 768px\" \/><\/a><figcaption class=\"wp-element-caption\">Fig. 
3 Model&#8217;s directory<\/figcaption><\/figure>\n\n\n\n<p>To compress the&nbsp;<code>model<\/code> directory, run the following commands from the directory that contains&nbsp;<strong>opt\/<\/strong>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ncd opt\/ml\/\ntar -czvf model.tar.gz model\n<\/pre><\/div>\n\n\n<p>Upload it to your desired S3 location.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><a href=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/4-5-768x593-3.png\"><img decoding=\"async\" width=\"768\" height=\"593\" src=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/4-5-768x593-3.png\" alt=\"Your S3 bucket name\" class=\"wp-image-411\" srcset=\"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/4-5-768x593-3.png 768w, https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/4-5-768x593-3-300x232.png 300w\" sizes=\"(max-width: 768px) 100vw, 768px\" \/><\/a><figcaption class=\"wp-element-caption\">Fig. 4 Your S3 bucket name<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Create an Amazon SageMaker model resource that refers to the Docker image in ECR<\/strong><\/h2>\n\n\n\n<p id=\"85b4\">There are several options to deploy a model using SageMaker hosting services. You can programmatically deploy a model using an AWS SDK (for example, the SDK for Python (Boto3)), the SageMaker Python SDK, or the AWS CLI, or you can interactively create a model with the SageMaker console. This article presents the first of these options: the SDK for Python (Boto3). You can run the following commands locally.<\/p>\n\n\n\n<p id=\"777f\">Set the values for the execution role, image URI, model URL, and model name. 
Call&nbsp;<code>sagemaker.create_model()<\/code>&nbsp;to create the model resource.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# create_aws_model.py\nimport boto3\n\n# Create SageMaker and IAM clients\nsagemaker = boto3.Session().client(&#039;sagemaker&#039;)\niam = boto3.client(&#039;iam&#039;)\n\nROLE = iam.get_role(RoleName=&#039;AWS_ROLE_NAME&#039;)&#x5B;&#039;Role&#039;]&#x5B;&#039;Arn&#039;]\nIMAGE = &quot;XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com\/your_repository_name:latest&quot;\nMODEL_URL = &quot;s3:\/\/your-s-bucket-name\/model.tar.gz&quot;\nMODEL_NAME = &quot;model-name&quot;\n\n\n# Create a model\nsagemaker.create_model(\n    ModelName=MODEL_NAME,\n    ExecutionRoleArn=ROLE,\n    PrimaryContainer={\n        &#039;Image&#039;: IMAGE,\n        &#039;ModelDataUrl&#039;: MODEL_URL\n    }\n)\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\"><strong>Create an Amazon SageMaker endpoint configuration<\/strong><\/h2>\n\n\n\n<p>The next step is to create an Amazon SageMaker endpoint configuration that specifies the model and the resources to be used for inference.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# Create an endpoint configuration\nendpoint_config_name = &#039;endpoint-config-name&#039;\nsagemaker.create_endpoint_config(\n    EndpointConfigName=endpoint_config_name,\n    ProductionVariants=&#x5B;{\n        &#039;InstanceType&#039;: &#039;ml.t2.medium&#039;,\n        &#039;InitialInstanceCount&#039;: 1,\n        &#039;ModelName&#039;: MODEL_NAME,\n        &#039;VariantName&#039;: &#039;Variant-1&#039;\n    }]\n)\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\"><strong>Deploy your model<\/strong><\/h2>\n\n\n\n<p>Finally, you are ready to deploy your model \u2014 that is, create an endpoint that hosts your model. 
Use the endpoint configuration specified above.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# Create an endpoint\nendpoint_name = &#039;endpoint-name&#039;\nsagemaker.create_endpoint(\n    EndpointName=endpoint_name,\n    EndpointConfigName=endpoint_config_name\n)\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\"><strong>Use Boto3 to test your inference<\/strong><\/h2>\n\n\n\n<p>Now for the final step. To make sure everything went well, create a test input and run it through the model. If you get an error, the AWS CloudWatch logs are a good place to debug.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nimport json\nimport boto3\n\nendpoint_name = &quot;endpoint-name&quot;\nruntime = boto3.client(&quot;runtime.sagemaker&quot;)\n\ntest_input = &#x5B;\n    {\n        &quot;document_id&quot;: &quot;1&quot;,\n        &quot;scope&quot;: &quot;Ron is Harry&#039;s best friend&quot;,\n    },\n    {\n        &quot;document_id&quot;: &quot;2&quot;,\n        &quot;scope&quot;: &quot;Hermione was the best in her class&quot;,\n    },\n]\n\npayload = json.dumps(test_input)\n\nresponse = runtime.invoke_endpoint(EndpointName=endpoint_name,\n                                   ContentType=&#039;application\/json&#039;,\n                                   Body=payload)\n\nresult = json.loads(response&#x5B;&quot;Body&quot;].read().decode())\nprint(result)\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n&gt;&gt; &#x5B;{&#039;prediction&#039;: &#x5B;{&#039;entity&#039;: &#039;I-PER&#039;, &#039;score&#039;: 0.9971058, &#039;index&#039;: 1, &#039;word&#039;: &#039;Ron&#039;, &#039;start&#039;: 0, &#039;end&#039;: 3}, {&#039;entity&#039;: &#039;I-PER&#039;, &#039;score&#039;: 0.9923815, 
&#039;index&#039;: 3, &#039;word&#039;: &#039;Harry&#039;, &#039;start&#039;: 7, &#039;end&#039;: 12}]}, {&#039;prediction&#039;: &#x5B;{&#039;entity&#039;: &#039;I-PER&#039;, &#039;score&#039;: 0.99259263, &#039;index&#039;: 1, &#039;word&#039;: &#039;Her&#039;, &#039;start&#039;: 0, &#039;end&#039;: 3}, {&#039;entity&#039;: &#039;I-PER&#039;, &#039;score&#039;: 0.9645591, &#039;index&#039;: 2, &#039;word&#039;: &#039;##mio&#039;, &#039;start&#039;: 3, &#039;end&#039;: 6}, {&#039;entity&#039;: &#039;I-PER&#039;, &#039;score&#039;: 0.9782252, &#039;index&#039;: 3, &#039;word&#039;: &#039;##ne&#039;, &#039;start&#039;: 6, &#039;end&#039;: 8}]}]\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\"><strong>Summary<\/strong><\/h2>\n\n\n\n<p id=\"1a39\">We have walked you through 9 simple steps for deploying custom models on AWS Sagemaker using FastAPI. This approach gives you greater flexibility and makes the implementation a piece of cake.<\/p>\n\n\n\n<p id=\"b355\">Make sure that after deployment your model operates correctly and efficiently. To quickly identify and address any issues in the production system, you might want to use the model monitoring tools integrated into the AWS ecosystem. AWS Sagemaker integrates with CloudWatch Logs and CloudWatch Metrics, which can trigger alarms when certain thresholds are exceeded. 
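<\/p>\n\n\n\n<p>As a small illustration, the parameters of such an alarm could look like the sketch below. The names and the threshold are made-up example values: <em>endpoint-name<\/em> and <em>Variant-1<\/em> mirror the names used earlier in this article, and <em>ModelLatency<\/em> is a SageMaker endpoint metric reported in microseconds.<\/p>

```python
# Sketch of a CloudWatch alarm definition for the SageMaker endpoint.
# "endpoint-name" and "Variant-1" mirror the names used in this article;
# the 500 ms latency threshold is an arbitrary example value.
ALARM_PARAMS = {
    "AlarmName": "endpoint-name-high-latency",
    "Namespace": "AWS/SageMaker",
    "MetricName": "ModelLatency",  # reported in microseconds
    "Dimensions": [
        {"Name": "EndpointName", "Value": "endpoint-name"},
        {"Name": "VariantName", "Value": "Variant-1"},
    ],
    "Statistic": "Average",
    "Period": 300,  # seconds
    "EvaluationPeriods": 2,
    "Threshold": 500_000.0,  # 500 ms expressed in microseconds
    "ComparisonOperator": "GreaterThanThreshold",
}

# Creating the alarm requires AWS credentials, e.g.:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**ALARM_PARAMS)
```

<p>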
By regularly updating your models and incorporating new data, you can improve their accuracy and ensure that they continue to provide value over time.<\/p>\n\n\n\n<p>***<br><a href=\"https:\/\/github.com\/kcepinska\/custom_model_aws\" target=\"_blank\" aria-label=\"You can find the code here (opens in a new tab)\" rel=\"noreferrer noopener nofollow\" class=\"ek-link\" >You can find the code here<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Although AWS SageMaker is a great platform that organizes and greatly simplifies all data science-related activities, the first experience of &hellip; <a class=\"continued-btn\"
href=\"https:\/\/sii.ua\/blog\/en\/deploying-custom-models-on-aws-sagemaker-using-fastapi\/\">Continued<\/a><\/p>\n","protected":false},"author":19,"featured_media":413,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_editorskit_title_hidden":false,"_editorskit_reading_time":0,"_editorskit_is_block_options_detached":false,"_editorskit_block_options_position":"{}","inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[86,88,87],"class_list":["post-404","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-hard-development","tag-aws-sagemaker","tag-docker","tag-fastapi"],"acf":[],"aioseo_notices":[],"featured_media_url":"https:\/\/sii.ua\/blog\/wp-content\/uploads\/2024\/01\/Deploying-custom-models-on-AWS-Sagemaker-using-FastAPI-1.jpg","category_names":["Hard development"],"_links":{"self":[{"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/posts\/404"}],"collection":[{"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/users\/19"}],"replies":[{"embeddable":true,"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/comments?post=404"}],"version-history":[{"count":4,"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/posts\/404\/revisions"}],"predecessor-version":[{"id":816,"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/posts\/404\/revisions\/816"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/media\/413"}],"wp:attachment":[{"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/media?parent=404"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/categories?post=404"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sii.ua\/blog\/en\/wp-json\/wp\/v2\/
tags?post=404"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}