Setup Guide for TextEmbed
This guide provides detailed instructions for setting up the TextEmbed server using both PyPI and Docker. Follow the steps below to get started!
📝 Prerequisites
- Python Version: Ensure you have Python 3.10 or higher installed.
- Dependencies: Install all required dependencies for the TextEmbed server.
⚙️ Installation via PyPI
- Install the required dependencies:
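Assuming the package is published on PyPI under the name `textembed` (verify against the project's PyPI page), the install step might look like:

```shell
# Install TextEmbed and its dependencies from PyPI
pip install -U textembed
```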
- Start the TextEmbed server with your desired models:
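A minimal launch command might look like the sketch below; the `textembed.server` module path and the model name are assumptions, so check the project's README for the exact entry point:

```shell
# Serve a single embedding model on the default port
python3 -m textembed.server --models sentence-transformers/all-MiniLM-L6-v2 --host 0.0.0.0 --port 8000
```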
- For more information and additional options, run:
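Assuming the same `textembed.server` entry point as above:

```shell
# Print the full list of server options and usage instructions
python3 -m textembed.server --help
```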
🔧 Available Server Options:
- --models: Comma-separated list of Hugging Face models to be used, e.g., `<Model1>,<Model2>`.
- --served_model_names: Comma-separated list of names under which the models will be served.
- --host: The host address where the application will run.
- --port: The port number where the application will run.
- --workers: Number of worker processes for batch processing.
- --batch_size: The batch size for processing requests.
- --embedding_dtype: The data type for the embeddings (`binary`, `float16`, or `float32`).
- --api_key: Your API key for authentication (Keep it secure and do not share it with others).
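Putting several of these options together, a fuller invocation might look like the following sketch (the entry point, model names, and placeholder API key are assumptions for illustration):

```shell
# Serve two models with tuned batching, compact embeddings, and API-key auth
python3 -m textembed.server \
  --models sentence-transformers/all-MiniLM-L6-v2,sentence-transformers/clip-ViT-B-32 \
  --workers 4 \
  --batch_size 32 \
  --embedding_dtype float16 \
  --api_key <YourSecretKey>
```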
🐳 Running with Docker (Recommended)
Run TextEmbed using Docker for a more streamlined deployment. The Docker image is available on Docker Hub.
- Pull the Docker image:
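Assuming the image is published on Docker Hub under a name like `kevaldekivadiya/textembed` (verify the exact repository on Docker Hub):

```shell
# Pull the latest TextEmbed image
docker pull kevaldekivadiya/textembed:latest
```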
- Run the Docker container:
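A run command might look like the sketch below; the image name and the pass-through server flags are assumptions, so adjust them to match the published image:

```shell
# Map the container port to the host and serve one model
docker run -p 8000:8000 kevaldekivadiya/textembed:latest \
  --models sentence-transformers/all-MiniLM-L6-v2 --port 8000
```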
- For more information and additional options, run:
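Assuming the same image name as above:

```shell
# Show the server's help text from inside the container
docker run kevaldekivadiya/textembed:latest --help
```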
This command displays the help message for the TextEmbed server, detailing the available options and usage instructions.
🌐 Accessing the API
Once the server is running, you can access the API documentation via Swagger UI by navigating to http://localhost:8000/docs in your web browser.
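Beyond the Swagger UI, you can call the API directly. The endpoint path below is an assumption (OpenAI-style `/v1/embedding`); confirm the exact route in the Swagger UI:

```shell
# Request an embedding for a short text input
curl -X POST http://localhost:8000/v1/embedding \
  -H "Content-Type: application/json" \
  -d '{"input": ["Hello, world!"], "model": "sentence-transformers/all-MiniLM-L6-v2"}'
```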
🖼️ Image Embedding Example
TextEmbed now supports generating embeddings for images, for example with the SentenceTransformer CLIP model (sentence-transformers/clip-ViT-B-32).
📷 Steps to Generate Image Embeddings
- Convert Image to Base64 String:
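This step only needs the Python standard library; a minimal sketch (the file name `cat.jpg` is a placeholder):

```python
import base64


def image_to_base64(path: str) -> str:
    """Read an image file and return its contents as a base64-encoded string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


# Example (assumes an image file named "cat.jpg" exists):
# encoded = image_to_base64("cat.jpg")
```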
- Make a POST Request to the TextEmbed Server:
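A sketch of the request using only the standard library; the `/v1/embedding` endpoint path is an assumption, so confirm the exact route in the Swagger UI:

```python
import json
import urllib.request


def build_embedding_request(base64_image: str, model: str) -> dict:
    """Build the JSON payload for the TextEmbed embedding endpoint."""
    return {"input": [base64_image], "model": model}


def post_embedding(payload: dict, url: str = "http://localhost:8000/v1/embedding") -> dict:
    """POST the payload to a running TextEmbed server and return the parsed response."""
    # NOTE: the endpoint path is an assumption; check http://localhost:8000/docs.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires a running server):
# payload = build_embedding_request("<Base64EncodedImageString>",
#                                   "sentence-transformers/clip-ViT-B-32")
# result = post_embedding(payload)
```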
📩 Example Request and Response
Request:
{
  "input": [
    "<Base64EncodedImageString>"
  ],
  "model": "sentence-transformers/clip-ViT-B-32",
  "user": "string"
}
Response:
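The exact response schema should be confirmed in the Swagger UI; an OpenAI-style embedding response typically looks like the sketch below (vector values truncated, field names are an assumption):

```json
{
  "data": [
    {
      "embedding": [0.0123, -0.0456, "..."],
      "index": 0,
      "object": "embedding"
    }
  ],
  "model": "sentence-transformers/clip-ViT-B-32",
  "object": "list",
  "usage": {
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}
```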