DeepSeek has taken the AI community by storm, with 68 models available on Hugging Face as of today. This family of open-source models can be accessed through Hugging Face or Ollama, while DeepSeek-R1 and DeepSeek-V3 can be used directly for inference via DeepSeek Chat. In this blog, we’ll explore DeepSeek’s model lineup and guide you through running these models using Google Colab and Ollama.
DeepSeek offers a diverse range of models, each optimized for different tasks. Below is a breakdown of which model suits your needs best:
Also Read: Building AI Application with DeepSeek-V3
To run DeepSeek models on your local machine, you first need to install Ollama. The script below works on Linux and macOS; Windows users can download the installer from ollama.com:
curl -fsSL https://ollama.com/install.sh | sh
Once Ollama is installed, open your Command Line Interface (CLI) and pull the model:
ollama pull deepseek-r1:1.5b
You can explore other DeepSeek models available on Ollama here: Ollama Model Search.
This step may take some time, so wait for the download to complete.
ollama pull deepseek-r1:1.5b
pulling manifest
pulling aabd4debf0c8... 100% ▕████████████████▏ 1.1 GB
pulling 369ca498f347... 100% ▕████████████████▏ 387 B
pulling 6e4c38e1172f... 100% ▕████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕████████████████▏ 148 B
pulling a85fe2a2e58e... 100% ▕████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
Once the model is downloaded, you can run it using the command:
ollama run deepseek-r1:1.5b
The model is now running locally and answers questions without any hiccups.
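Beyond the interactive CLI, a locally running Ollama instance also exposes a REST API on port 11434, which you can call from Python. Here is a minimal sketch using only the standard library; the helper names (`build_payload`, `ask`) are ours, and it assumes the Ollama server is running and the `deepseek-r1:1.5b` model has been pulled as shown above.

```python
import json
import urllib.request

# Default endpoint exposed by a locally running Ollama server
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Construct the JSON body for Ollama's /api/generate endpoint.

    stream=False asks the server to return one complete JSON object
    instead of a stream of partial responses.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the model pulled earlier):
# print(ask("deepseek-r1:1.5b", "Explain quantization in one sentence."))
```

This is handy when you want to script the model into a larger workflow rather than chatting with it in the terminal.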
In this section, we’ll try out DeepSeek-Janus-Pro-1B using Google Colab. Before starting, make sure to set the runtime to T4 GPU for optimal performance.
Run the following command in a Colab notebook:
!git clone https://github.com/deepseek-ai/Janus.git
🔗 Explore more DeepSeek models on GitHub: DeepSeek AI GitHub Repository
Navigate to the cloned directory and install the required packages:
%cd Janus
!pip install -e .
!pip install flash-attn
Now, we’ll import necessary libraries and load the model onto CUDA (GPU):
import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images
# Define model path
model_path = "deepseek-ai/Janus-Pro-1B"
# Load processor and tokenizer
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
# Load model with remote code enabled
vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
# Move model to GPU
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
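The `.cuda()` call above will fail if the Colab runtime is not set to a GPU, so it can be worth adding a small guard. This is a sketch of our own (not part of the original notebook) that picks a device and dtype, falling back to CPU with float32 when no CUDA device is visible:

```python
import torch

def pick_device_and_dtype():
    """Return (device, dtype): bfloat16 on GPU to save memory, float32 on CPU."""
    if torch.cuda.is_available():
        return "cuda", torch.bfloat16
    return "cpu", torch.float32

device, dtype = pick_device_and_dtype()
# vl_gpt = vl_gpt.to(dtype).to(device).eval()
```

On a T4 runtime this selects CUDA with bfloat16, matching the line above; on a CPU-only runtime the notebook will still run, just much more slowly.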
Now, let’s pass an image to the model and generate a response.
📷 Input Image
image_path = "/content/snapshot.png"
question = "What's in the image?"
conversation = [
{"role": "<|User|>", "content": f"<image_placeholder>\n{question}", "images": [image_path]},
{"role": "<|Assistant|>", "content": ""}
]
# Load image
pil_images = load_pil_images(conversation)
# Prepare inputs for the model
prepare_inputs = vl_chat_processor(conversations=conversation, images=pil_images, force_batchify=True).to(vl_gpt.device)
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
# Generate response
outputs = vl_gpt.language_model.generate(
inputs_embeds=inputs_embeds,
attention_mask=prepare_inputs.attention_mask,
pad_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=512,
do_sample=False,
use_cache=True,
)
# Decode and print response
answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(f"{prepare_inputs['sft_format'][0]}", answer)
Output:
<|User|>:
What’s in the image?
<|Assistant|>: The image features a section titled “Latest Articles” with a focus on a blog post. The blog post discusses “How to Access DeepSeek Janus Pro 7B?” and highlights its multimodal AI capabilities in reasoning, text-to-image, and instruction-following. The image also includes the DeepSeek logo (a dolphin) and a hexagonal pattern in the background.
We can see that the model is able to read the text in the image and also spot the DeepSeek logo. Initial impressions suggest it is performing well.
Also Read: How to Access DeepSeek Janus Pro 7B?
DeepSeek is rapidly emerging as a powerful force in AI, offering a wide range of models for developers, researchers, and general users. As it competes with industry giants like OpenAI and Gemini, its cost-effective and high-performance models are likely to gain widespread adoption.
The applications of DeepSeek models are limitless, ranging from coding assistance to advanced reasoning and multimodal capabilities. With seamless local execution via Ollama and cloud-based inference options, DeepSeek is poised to become a game-changer in AI research and development.
If you have any questions or face issues, feel free to ask in the comments section!