An example microservice that predicts consumer loan probabilities, using Redis as a feature and model store and RedisAI as the inference server.
Updated Jan 13, 2023 · Jupyter Notebook
Different ways of implementing an API to serve an image classification model
Run your own production inference code with SageMaker
A basic ML platform that includes a model registry and an inference server
Serve PyTorch inference requests with Redis-backed batching for faster performance.
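The batching idea behind projects like the one above can be sketched in plain Python: queued requests are drained into batches so the model runs once per batch instead of once per request. This is a minimal, hypothetical sketch — a real deployment would use a Redis list as the queue and a PyTorch model; the stub `model` and the parameter names are assumptions that keep the example self-contained.

```python
import queue

def model(batch):
    # Stand-in for a PyTorch forward pass over a whole batch of inputs.
    return [x * 2 for x in batch]

def batch_worker(requests, max_batch=4, timeout=0.1):
    """Drain the queue into batches and invoke the model once per batch."""
    results = {}
    while True:
        batch, ids = [], []
        try:
            req_id, payload = requests.get(timeout=timeout)
        except queue.Empty:
            break  # no more pending requests
        batch.append(payload)
        ids.append(req_id)
        # Greedily pull more requests until the batch is full or the queue is empty.
        while len(batch) < max_batch:
            try:
                req_id, payload = requests.get_nowait()
            except queue.Empty:
                break
            batch.append(payload)
            ids.append(req_id)
        for req_id, out in zip(ids, model(batch)):  # one model call per batch
            results[req_id] = out
    return results

pending = queue.Queue()
for i in range(6):
    pending.put((i, i + 1))
print(batch_worker(pending))  # {0: 2, 1: 4, 2: 6, 3: 8, 4: 10, 5: 12}
```

Six requests are served with two model calls (batches of 4 and 2) rather than six, which is where the latency win comes from on GPU-backed models.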
This repository contains a project for detecting flags in images using the YOLOv7 object detection model. Designed for applications in events, sports, and public gatherings, this project enables accurate flag localization through advanced deep learning.
Bundle of Repositories that power up all the Crop Prediction Applications
An AI-powered mobile crop advisory app for farmers and gardeners that provides information about crops from an image taken by the user. It supports 10 crops and 37 crop diseases. The AI model is a ResNet fine-tuned on crop images collected by web-scraping Google Images and the PlantVillage dataset.
Effortlessly Deploy and Serve Large Language Models in the Cloud as an API Endpoint for Inference
This repository serves as a client that sends sensor messages from ROS or other sources to the inference server and processes the inference results.
A networked inference server for Whisper speech recognition
Create your own LLM inference server from scratch
Inference Server Implementation from Scratch for Machine Learning Models
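A "from scratch" inference server like the two entries above can be reduced to a very small core: an HTTP endpoint that deserializes a request, runs the model, and serializes the prediction. Below is a minimal sketch using only the Python standard library; the `/predict` route and the stub `predict` function are assumptions, not any listed repository's actual API — a real server would swap in a framework forward pass.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for a real model forward pass.
    return [x * 2 for x in features]

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"features": [1, 2, 3]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result)  # {'prediction': [2, 4, 6]}
server.shutdown()
```

Everything beyond this core — batching, model versioning, GPU scheduling — is what the heavier projects in this list layer on top.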
Serving distributed deep learning models with model parallel swapping.
Audio components for geniusrise framework
The Universal LLM Gateway - Integrate ANY AI Model with One Consistent API
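The gateway pattern behind a "one consistent API" project is essentially an adapter registry: each provider gets a small adapter exposing one shared interface, and the gateway routes by model name. This is a hypothetical sketch of that pattern — the provider names, the `provider/model-id` route format, and the `complete` method are illustrative assumptions, not this repository's actual API.

```python
class EchoProvider:
    """Stand-in adapter; a real one would call a provider SDK or HTTP API."""
    def complete(self, prompt):
        return f"echo: {prompt}"

class ShoutProvider:
    def complete(self, prompt):
        return prompt.upper()

class Gateway:
    def __init__(self):
        self.providers = {}

    def register(self, name, provider):
        self.providers[name] = provider

    def complete(self, model, prompt):
        # Model strings look like "provider/model-id"; route on the prefix.
        provider_name = model.split("/", 1)[0]
        return self.providers[provider_name].complete(prompt)

gateway = Gateway()
gateway.register("echo", EchoProvider())
gateway.register("shout", ShoutProvider())
print(gateway.complete("echo/v1", "hello"))   # echo: hello
print(gateway.complete("shout/v1", "hello"))  # HELLO
```

Callers depend only on `Gateway.complete`, so swapping or adding model providers never changes client code — the core selling point of an LLM gateway.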
Vision and vision-multi-modal components for geniusrise framework