API

txtai has a full-featured API, backed by FastAPI, that can optionally be enabled for any txtai process. All functionality found in txtai can be accessed via the API.

The following is an example configuration and startup script for the API.

Note: This configuration file enables all functionality. For memory-bound systems, splitting pipelines into multiple instances is a best practice.

```yaml
# Index file path
path: /tmp/index

# Allow indexing of documents
writable: True

# Embeddings index
embeddings:
  path: sentence-transformers/nli-mpnet-base-v2

# Extractive QA
extractor:
  path: distilbert-base-cased-distilled-squad

# Zero-shot labeling
labels:

# Similarity
similarity:

# Text segmentation
segmentation:
  sentences: true

# Text summarization
summary:

# Text extraction
textractor:
  paragraphs: true
  minlength: 100
  join: true

# Transcribe audio to text
transcription:

# Translate text between languages
translation:

# Workflow definitions
workflow:
  sumfrench:
    tasks:
      - action: textractor
        task: url
      - action: summary
      - action: translation
        args: ["fr"]
  sumspanish:
    tasks:
      - action: textractor
        task: url
      - action: summary
      - action: translation
        args: ["es"]
```

Assuming this YAML content is stored in a file named config.yml, the following command starts the API process.

```bash
CONFIG=config.yml uvicorn "txtai.api:app"
```

uvicorn is a full-featured, production-ready server with support for SSL and more. See the uvicorn deployment guide for details.

Connect to API

The default port for the API is 8000. See the uvicorn link above to change this.
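For example, the host and port can be overridden with uvicorn's standard CLI flags. This is a sketch; the flag values shown here are arbitrary choices, not requirements:

```shell
# Start the API on all interfaces, port 8080 instead of the default 8000
CONFIG=config.yml uvicorn "txtai.api:app" --host 0.0.0.0 --port 8080
```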

txtai has a number of language bindings that abstract the API (see links below). Alternatively, code can be written to connect directly to the API. Documentation for a live running instance can be found at the /docs URL (e.g. http://localhost:8000/docs). The following example runs a workflow using cURL.

```bash
curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"sumfrench", "elements": ["https://github.com/neuml/txtai"]}'
```
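The same request can be made from Python without any client library. This is a minimal sketch using only the standard library; the `run_workflow` helper is a name introduced here for illustration, and it assumes the API is running locally on the default port:

```python
import json
from urllib.request import Request, urlopen

def run_workflow(name, elements, url="http://localhost:8000/workflow"):
    # POST the same JSON payload as the cURL example above
    payload = json.dumps({"name": name, "elements": elements}).encode("utf-8")
    request = Request(url, data=payload, headers={"Content-Type": "application/json"})
    with urlopen(request) as response:
        return json.loads(response.read())

# Example call (requires a running API instance):
# run_workflow("sumfrench", ["https://github.com/neuml/txtai"])
```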

Local instance

A local instance can be instantiated. In this case, a txtai application runs internally, without any network connections, providing the same consolidated functionality. This enables running txtai in Python with configuration.

The configuration above can be run in Python with:

```python
from txtai.app import Application

# Load and run workflow
app = Application("config.yml")
app.workflow("sumfrench", ["https://github.com/neuml/txtai"])
```

See this link for a full list of methods.

Run with containers

The API can be containerized and run. This will bring up an API instance without having to install Python, txtai or any dependencies on your machine!

See this section for more information.
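As a sketch, a containerized instance might be started as follows. The image name (neuml/txtai-cpu), mount point and CONFIG handling are assumptions here; see the section referenced above for the supported setup:

```shell
# Mount the local configuration and expose the default API port
docker run --rm -it -p 8000:8000 \
  -v "$(pwd)":/config -e CONFIG=/config/config.yml \
  neuml/txtai-cpu
```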

Supported language bindings

The following programming languages have bindings with the txtai API:

- JavaScript
- Java
- Rust
- Go

See the link below for a detailed example covering how to use the API.

| Notebook | Description |
|:---------|:------------|
| API Gallery | Using txtai in JavaScript, Java, Rust and Go |