FAQ

Below is a list of frequently asked questions and common issues encountered.

Questions


Question

What models are recommended?

Answer

See the model guide.


Question

What is the best way to track the progress of an embeddings.index call?

Answer

Wrap the list or generator passed to the index call with tqdm. See issue #478 for more.
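As a sketch of the pattern, with a hypothetical consume function standing in for an actual Embeddings instance (in real code, replace consume with embeddings.index):

```python
from tqdm import tqdm

# Stand-in for embeddings.index: consumes an iterable of (id, data, tags) tuples
def consume(documents):
    total = 0
    for _ in documents:
        total += 1
    return total

documents = [(uid, f"document {uid}", None) for uid in range(1000)]

# tqdm advances as the iterable is consumed, so the progress bar
# tracks indexing progress without changing the data passed in
count = consume(tqdm(documents, total=len(documents)))
```

Since index consumes the iterable lazily, the bar advances in step with the documents actually processed.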


Question

What is the best way to analyze the content of a txtai index?

Answer

txtai has a console application that makes this easy. Read this article to learn more.


Question

How can models be externally loaded and passed to embeddings and pipelines?

Answer

Embeddings example.

```python
from transformers import AutoModel, AutoTokenizer

from txtai.embeddings import Embeddings

# Load model externally
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# Pass to embeddings instance
embeddings = Embeddings(path=model, tokenizer=tokenizer)
```

LLM pipeline example.

```python
import torch

from transformers import AutoModelForCausalLM, AutoTokenizer

from txtai.pipeline import LLM

# Load Mistral-7B-OpenOrca
path = "Open-Orca/Mistral-7B-OpenOrca"
model = AutoModelForCausalLM.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(path)

llm = LLM((model, tokenizer))
```

Common issues


Issue

Embeddings query errors like this:

```text
SQLError: no such function: json_extract
```

Solution

Upgrade the Python installation. This error occurs when Python is linked against a SQLite build that lacks the JSON1 extension, which provides json_extract.
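To confirm whether the running Python's SQLite build supports json_extract, a quick standard-library check (supports_json_extract is a hypothetical helper name):

```python
import sqlite3

def supports_json_extract():
    """Return True if the linked SQLite build provides json_extract (JSON1)."""
    connection = sqlite3.connect(":memory:")
    try:
        connection.execute("""SELECT json_extract('{"a": 1}', '$.a')""")
        return True
    except sqlite3.OperationalError:
        return False
    finally:
        connection.close()

print("SQLite version:", sqlite3.sqlite_version)
print("json_extract available:", supports_json_extract())
```

If this prints False, the fix is a newer Python build rather than a txtai change.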


Issue

Segmentation faults and similar errors on macOS

Solution

Disable OpenMP threading by setting the environment variable OMP_NUM_THREADS=1 (for example, export OMP_NUM_THREADS=1) or downgrade PyTorch to <= 1.12. See issue #377 for more.
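The environment variable can also be set from Python, provided it happens before torch is first imported, since OpenMP reads it at initialization; a minimal sketch:

```python
import os

# Set before the first `import torch`: OpenMP reads OMP_NUM_THREADS when it loads
os.environ["OMP_NUM_THREADS"] = "1"

# import torch  # subsequent imports now see single-threaded OpenMP
```

Setting it after torch has already been imported has no effect, which is why the export form is usually safer.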


Issue

ContextualVersionConflict and/or package METADATA exception while running one of the example notebooks on Google Colab

Solution

Restart the kernel. See issue #409 for more.