Just starting out? Try Jina's "Hello, World": jina hello --help

A simple image neural search demo for Fashion-MNIST. No extra dependencies are needed; simply run:
jina hello fashion # more options in --help
...or, even easier for Docker users, with no install required:
docker run -v "$(pwd)/j:/j" jinaai/jina hello fashion --workdir /j && open j/hello-world.html
replace "open" with "xdg-open" on Linux

For NLP engineers, we provide a simple chatbot demo for answering Covid-19 questions. To run it:
pip install "jina[chatbot]"
jina hello chatbot
This downloads the CovidQA dataset and tells Jina to index 418 question-answer pairs with DistilBERT. Indexing takes about 1 minute on CPU. Then it opens a web page where you can type questions and ask Jina.
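
The retrieval pattern behind such a chatbot is to index the embedding of each question and, at query time, return the answer attached to the most similar question. The sketch below illustrates that pattern with a toy hashing encoder standing in for DistilBERT and two made-up QA pairs; it is not the demo's actual code or Jina's API.

```python
import numpy as np

# Toy stand-in for DistilBERT: hash words into a fixed-size bag-of-words vector.
def encode(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim, dtype=np.float32)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

# Index: question-answer pairs (the demo indexes 418 of these from CovidQA).
qa_pairs = [
    ("What are the symptoms of Covid-19?", "Fever, cough and fatigue are common symptoms."),
    ("How does the virus spread?", "Mainly through respiratory droplets from infected people."),
]
question_vecs = np.stack([encode(q) for q, _ in qa_pairs])

# Search: embed the user's question and return the answer of the closest question.
user_question = "What symptoms does Covid-19 cause?"
scores = question_vecs @ encode(user_question)
best = int(np.argmax(scores))
print(qa_pairs[best][1])
```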

A multimodal document contains multiple data types, e.g. a PDF often contains both figures and text. Jina lets you build a multimodal search solution in just minutes. To run our minimal multimodal document search demo:
pip install "jina[multimodal]"
jina hello multimodal
This downloads people image dataset and tells Jina to index 2,000 image-caption pairs with MobileNet and DistilBERT. The index process takes about 3 minute on CPU. Then it opens a web page where you can query multimodal documents. We have prepared a YouTube tutorial to walk you through this demo.
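
A multimodal query scores each candidate on more than one modality at once, e.g. image similarity and caption similarity combined. The sketch below shows that ranking idea with random toy embeddings and an equal-weight fusion; the embedding dimensions, the weights, and the data are illustrative assumptions, while the demo itself relies on MobileNet and DistilBERT embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x: np.ndarray) -> np.ndarray:
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-9)

# Toy embeddings standing in for MobileNet (image) and DistilBERT (caption) vectors,
# one pair per document (the demo indexes 2,000 image-caption pairs).
n_docs = 2_000
image_vecs = normalize(rng.normal(size=(n_docs, 128)))
caption_vecs = normalize(rng.normal(size=(n_docs, 128)))

# A multimodal query carries one embedding per modality.
query_image = normalize(rng.normal(size=128))
query_caption = normalize(rng.normal(size=128))

# Score each document on both modalities and fuse with a weighted sum.
image_scores = image_vecs @ query_image
caption_scores = caption_vecs @ query_caption
fused = 0.5 * image_scores + 0.5 * caption_scores
top5 = np.argsort(-fused)[:5]
print("top-5 image-caption pairs:", top5)
```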