Andrejus Baranovski

Blog about Oracle, Full Stack, Machine Learning and Cloud

OpenAI GPT-3 API Overview

Sun, 2021-11-21 14:16
GPT-3 API review. I walk through a few examples and show how it works in OpenAI's playground. You will see how GPT-3 generates a SQL statement from natural language text, how it creates an outline for an essay, and how it generates recipe directions from food ingredients. There is also an option to use the GPT-3 API in your applications through a REST interface.
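The REST interface boils down to a JSON POST with a bearer token. A minimal sketch with the standard library, assuming the 2021-era Completions endpoint; the model name and prompt here are my own illustrative choices, not from the video:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, api_key, model="davinci",
                             max_tokens=64, temperature=0.0):
    """Build the HTTP request for a GPT-3 completion call."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_completion_request(
        "Translate to SQL: show all customers from London", api_key="sk-...")
    # Sending the request needs a real API key:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["text"])
```

The response JSON carries generated text under `choices[0].text`, which is how the playground examples map onto application code.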


Scale FastAPI on Kubernetes Pod

Sun, 2021-11-14 13:55
This video shows how to scale a FastAPI REST endpoint running on Kubernetes Pods.
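The scaling itself comes down to the replica count on the Deployment. A sketch under my own placeholder names (`fastapi-app`, the image tag, port 8000):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-app
spec:
  replicas: 3            # number of FastAPI Pod instances
  selector:
    matchLabels:
      app: fastapi-app
  template:
    metadata:
      labels:
        app: fastapi-app
    spec:
      containers:
        - name: fastapi
          image: my-registry/fastapi-app:latest   # placeholder image
          ports:
            - containerPort: 8000
```

At runtime the same thing can be done imperatively with `kubectl scale deployment fastapi-app --replicas=5`.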


Python Numpy Array in ML Services

Sun, 2021-11-07 14:04
It is not obvious how to pass a Numpy array between separate services when running ML infrastructure in separate containers, for example when the data preparation service runs in a different container than the training service. In this tutorial, I show how to convert a Numpy array to a list and then to JSON, so it can be sent through the RabbitMQ message broker and consumed on the receiver side.
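The conversion step can be sketched as a round trip; the RabbitMQ publish/consume calls are omitted, and the variable names are mine:

```python
import json

import numpy as np

# Producer side: a Numpy array is not JSON serializable directly,
# so convert it to a nested list first.
features = np.array([[1.5, 2.0], [3.25, 4.0]])
message = json.dumps({"features": features.tolist()})
# `message` is the string that would be published as the RabbitMQ message body.

# Consumer side: parse the JSON and rebuild the Numpy array.
payload = json.loads(message)
restored = np.array(payload["features"])

assert np.array_equal(features, restored)
```

The cost is that dtype and shape metadata travel implicitly; for large arrays a binary encoding would be more compact, but JSON keeps the message readable and language-neutral.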


BIY Workflow with FastAPI, Python and Skipper

Mon, 2021-11-01 04:13
Build It Yourself. In this video, I explain how you can build a workflow running with generic FastAPI REST endpoints. The best thing about it: Skipper's architecture is modular, and the workflow runs in a separate Docker container. This means you can replace it with your own implementation if needed. I explain how the workflow call is integrated into the FastAPI logic and how the call is made to get the queue name from the workflow. Using this queue name and the RabbitMQ message broker, event-based communication runs between containers.


MLOps: Extend Skipper ML Services

Mon, 2021-10-25 06:44
The goal of this video is to explain Skipper from an MLOps user's perspective: the different blocks of Skipper and how they fit together. I show how a sample set of ML services works and how you could replace it or add your own service. The Skipper engine is implemented in Python, but you could add a service container implemented in any language. Everything runs on Kubernetes.


Running Kubernetes on Oracle Cloud OCI

Mon, 2021-10-18 03:42
Oracle Cloud OCI provides a good environment for running your Kubernetes workloads. In this video, I show how to access a Kubernetes cluster in OCI and explain the artifacts related to the cluster. I show how the Skipper API runs on Kubernetes deployed on OCI. The cluster runtime is accessed through Cloud Shell.


MLOps: Scaling TensorFlow Model on Kubernetes

Mon, 2021-10-11 03:04
An ML model serving/prediction API can be scaled on Kubernetes by adding or removing Pod instances. I show you a live demo and explain how scaling can be done for a TensorFlow model running on Kubernetes.

MLOps: Sharing Model Across Services

Sun, 2021-10-03 06:16
Typically you would want to scale ML model training and ML inference/prediction services separately. This means the services should not share file storage, at least in a Kubernetes cluster environment. I explain how you can transfer files between services using RabbitMQ.
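One way to move a file through a message broker is to base64-encode its bytes into a JSON message body. This is only a sketch of the encode/decode step, with my own function names; the actual RabbitMQ publish and consume calls are left out:

```python
import base64
import json

def encode_file_message(filename: str, content: bytes) -> str:
    """Pack binary file content (e.g. a saved model) into a JSON message body."""
    return json.dumps({
        "filename": filename,
        "content": base64.b64encode(content).decode("ascii"),
    })

def decode_file_message(body: str):
    """Unpack the JSON message back into a filename and the raw bytes."""
    payload = json.loads(body)
    return payload["filename"], base64.b64decode(payload["content"])

# Round trip: what the sender publishes is what the receiver restores.
msg = encode_file_message("model.h5", b"\x00\x01binary-model-bytes")
name, data = decode_file_message(msg)
```

Base64 inflates the payload by about a third, so for very large model files a chunked transfer or shared object storage may fit better; for typical model artifacts this keeps the services fully decoupled.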


MLOps with TensorFlow and Kubernetes Powered by RabbitMQ and FastAPI

Sun, 2021-09-26 06:26
I show how to run TensorFlow model training and data processing containers in a single Kubernetes Pod. The model training container runs as the main container and data processing runs as a sidecar. Running both containers in a single Pod allows them to share files in common storage. I'm using a persistent volume to store the TensorFlow model and stats data from the data processing container. This video walks through the complete use case of data processing, model training, and ML microservices communication.
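The main-plus-sidecar layout with a shared volume can be sketched like this; all names (Pod, images, PVC, mount path) are placeholders of mine, not necessarily the video's exact manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tf-training
spec:
  containers:
    - name: model-training            # main container
      image: my-registry/tf-training:latest       # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data            # both containers see the same files here
    - name: data-processing           # sidecar container
      image: my-registry/data-processing:latest   # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-data-pvc    # placeholder PVC name
```

Because both containers mount the same volume, the sidecar can write processed data that the training container reads, and the trained model survives Pod restarts via the persistent volume claim.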


RabbitMQ on Kubernetes in Skipper

Mon, 2021-09-20 07:51
RabbitMQ works great for event-based microservices implementations. Katana ML Skipper is our open-source product; it helps to run workflows and connect multiple services, and it is specifically specialized for ML workflows. In this video, I explain how we integrated RabbitMQ and how we run it on a Kubernetes cluster. I believe this can be helpful if you are researching how to run RabbitMQ on a Kubernetes cluster for your own use cases.


FastAPI on Kubernetes with NGINX Ingress

Mon, 2021-09-13 03:29
A simple tutorial about a complex thing: how to expose a FastAPI app to the world on Kubernetes with the NGINX Ingress Controller. I explain the structure of the Kubernetes Pod for FastAPI along with the Kubernetes Service. I show how FastAPI properties should be set so the app is accessible through an Ingress path definition. You will learn how to check the logs for the NGINX Ingress Controller and the FastAPI Pod.


TensorFlow.js Setup for React JS App (Manning liveProject)

Mon, 2021-08-30 05:58
I explain the structure of my liveProject. It is a series of five projects; I use the first one as an example (it is free). React is highly prized by developers for its ease of building simple and intuitive frontends. The liveProject teaches how to use Machine Learning directly within React code and run it in the browser. After working on this liveProject, you will learn how to run the PoseNet model and use data collected by PoseNet to train your own custom ML model with TensorFlow.js. The React application will help to track physical workout movements, classify them, and count statistics.


Routing Traffic Between FastAPI Pods in Kubernetes

Mon, 2021-08-23 06:01
This is a quick tutorial showing how to route traffic between Kubernetes Pods. Both Pods run FastAPI endpoints. I show how to create Deployment and Service elements for a Kubernetes Pod, and how to refer to that Service from another Pod to execute an HTTP call.
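Inside the cluster, one Pod reaches another through the target's Service DNS name rather than a Pod IP. A sketch of that addressing; the service name, namespace, port, and path are placeholders I chose:

```python
import json
import urllib.request

def service_url(service: str, path: str, namespace: str = "default",
                port: int = 8000) -> str:
    """Build the in-cluster DNS URL for a Kubernetes Service."""
    return f"http://{service}.{namespace}.svc.cluster.local:{port}{path}"

def call_service(service: str, path: str, **kwargs) -> dict:
    """Execute an HTTP GET against another Pod through its Service."""
    with urllib.request.urlopen(service_url(service, path, **kwargs)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # The actual call only resolves when running inside the cluster:
    # print(call_service("fastapi-service", "/status"))
    print(service_url("fastapi-service", "/status"))
```

Because the Service name is stable while Pod IPs change, this is the address the calling Pod should use regardless of how many replicas sit behind the Service.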


Human Pose Estimation with TensorFlow.js and React

Thu, 2021-08-19 14:02
Want to learn #MachineLearning and #React by doing? My @ManningBooks liveProject 'Human Pose Estimation with TensorFlow.js and React' is published. Free access to Manning books is included. Try it here.

FastAPI Running on Kubernetes Pod

Mon, 2021-08-09 06:29
A step-by-step tutorial where I explain and show how to run a FastAPI app on a Kubernetes Pod. I keep it simple. I explain when it makes sense to use multiple containers in a single Pod and when you should put containers into different Pods.


Dockerfile and Docker Compose Tutorial for MLOps

Mon, 2021-08-02 06:19
In this tutorial, I talk about MLOps and explain the difference between a Dockerfile and a Docker Compose YML definition file. I briefly explain what you should be aware of if you are planning to move your solution to Kubernetes in the future. I explain in simple words what a Dockerfile is and when Docker Compose is useful. The sample service is based on TensorFlow functionality, where we call the model's predict function to process a serving request.
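To make the distinction concrete: a Dockerfile describes how to build one service image, while docker-compose.yml wires several such images (plus infrastructure like RabbitMQ) together. A minimal sketch of the former, with placeholder file names and versions of my own:

```dockerfile
# Builds a single Python service image; docker-compose.yml would
# reference this image and run it alongside other services.
FROM python:3.8-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached
# when only application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

A Kubernetes Deployment later consumes the same built image, which is why a clean Dockerfile is the part that carries over when you outgrow Docker Compose.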


Hugging Face Course and Pretrained Model Fine-Tuning

Mon, 2021-07-26 02:52
The Hugging Face team recently released an online course about transformers, pretrained model fine-tuning, and sharing models on the Hugging Face Hub. I went through the first part of the course, related to model fine-tuning. I explain what changes I made to my previous Hugging Face model fine-tuning sample, based on knowledge learned from this course.


Serving ML Model with Docker, RabbitMQ, FastAPI and Nginx

Mon, 2021-07-19 01:53
In this tutorial, I explain how to serve an ML model using tools such as Docker, RabbitMQ, FastAPI, and Nginx. The solution is based on our open-source product Katana ML Skipper (or just Skipper). It allows running an ML workflow using a group of microservices. It is not limited to ML; you can run any workload with Skipper and plug in your own services. You can reach out to me if you have any questions.


TensorFlow Decision Forests Example

Wed, 2021-07-14 06:32
With TensorFlow Decision Forests, we can handle structured data without much preprocessing. There is no need to normalize numeric values, one-hot encode categorical values, or set magic values to replace missing data. I give this new TensorFlow feature a try in this video. The demo is based on the Titanic dataset taken from Kaggle.


FastAPI Background Tasks for Non-Blocking Endpoints

Mon, 2021-06-28 06:34
With FastAPI it is super easy to implement a non-blocking endpoint. This is useful when the endpoint calls logic that should be executed asynchronously, and you don't need to wait for the result but want to return a response immediately. For example, a service that does logging: we don't want to wait until the log is written, but return the response instantly.