Jupyter Notebooks

Launch

Within the context of our AI tutorial, start by activating your conda environment (or venv).

Once active, run the following command to launch Jupyter

jupyter lab

Jupyter Notebook: the name comes from the original three languages that were supported, Julia, Python and R. Sure, there is a missing "e" in there, but I guess the name is still 90% cool. The logo pays homage to Galileo's discovery of the moons of Jupiter, so I am not sure the name qualifies as a pun, since it is partly an abbreviation of three language names and partly a misspelled planet.

Enough about the name and logo, let us get into what a Jupyter Notebook actually is.

Technically, it is a JSON file with an .ipynb extension; in practice, it is a way to create a web document with live code, equations, formatted text, and other content.
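If you are curious, you can peek at that JSON yourself. Here is a small sketch; the file name example.ipynb is just a placeholder for any notebook you have on disk:

import json

with open("example.ipynb") as f:  # placeholder name, use any notebook you have
    nb = json.load(f)

print(nb.keys())                    # typically: cells, metadata, nbformat, nbformat_minor
print(nb["cells"][0]["cell_type"])  # each cell records its type: code, markdown or raw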

The types of cells in a notebook are:

  • Code Cells: Execute code and display the output.
  • Markdown Cells: Write formatted text using Markdown.
  • Raw NBConvert Cells: Include content that is not evaluated by the notebook kernel.

So, the markup inside that JSON file is not specific to AI per se; it is used in many scientific fields. But as you work your way through this blog, you will see how important it is for what we are trying to achieve. Important in the sense that it makes development simpler; it is not something you will use in your final product.

In any case, to start Jupyter, activate your conda environment, then run the following command from a terminal whose current directory is your project directory:

jupyter lab

Then, assuming you are working on a project that you downloaded from GitHub, for example, you can open the .ipynb files found in there by clicking on them in the file browser on the left.

Code cells are executed by a background Python process (the kernel), one cell at a time: move to a cell and press Shift+Enter to run it.
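To make that concrete, here is the kind of thing you might put into a first code cell just to confirm the kernel is alive (nothing here is project-specific):

import sys

print(sys.version)  # shows which Python interpreter the kernel is running
print(2 + 2)        # the output appears directly below the cell after Shift+Enter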

Setting up Anaconda for AI

What is Anaconda?

Conda, like pip, is a Python package manager, but conda is probably the more thorough solution of the two, with better support for non-Python packages (pip's support there is very limited) and for more complex dependency trees.

To clarify things: conda is the package manager, while Anaconda is a bigger bundle. If you want to install conda alone, you are probably looking for Miniconda. Anaconda is a set of a hundred or so packages including conda, numpy, scipy, the IPython/Jupyter notebook, and so on.

So, let us go through installing and using Anaconda on all three platforms: Windows, Linux, and Mac.

Linux

On Debian, there is no Anaconda package in the default repositories, so to install Anaconda (or Miniconda, for that matter) you will need to download the installer script from Anaconda and run it. You can get conda through apt by adding the "https://repo.anaconda.com" repository if you are willing to do so (apt install conda), but here I will assume you just want Anaconda, and the only orthodox way to install that is with the installation script.

Download the Anaconda installer from the Anaconda Website

https://www.anaconda.com/download

Navigate to your Downloads folder and execute the script you just downloaded. In my case, the script's name was Anaconda3-2024.10-1-Linux-x86_64.sh, so I ran the following:

cd /home/qworqs/Downloads
chmod +x Anaconda3-2024.10-1-Linux-x86_64.sh
./Anaconda3-2024.10-1-Linux-x86_64.sh

After accepting the license agreement, I see the following message:

Anaconda3 will now be installed into this location:
/home/qworqs/anaconda3

I accepted the suggested location.

I opted to keep the installer in the Downloads directory just in case something went wrong, but you can safely delete the roughly 1GB installer if you like.

At the end of the installation, the installer offers to update your shell profile. In my case, I opted NOT to; if you opted otherwise, you can always set auto_activate_base back to false later, as the installer's own message below explains.

Do you wish to update your shell profile to automatically initialize conda?
This will activate conda on startup and change the command prompt when activated.
If you'd prefer that conda's base environment not be activated on startup,
run the following command when conda is activated:

conda config --set auto_activate_base false

You can undo this by running `conda init --reverse $SHELL`? [yes|no]

The environment

So, on to the environment. Every project comes with a yaml file that lists its dependencies. If you are following the index page "setup", you do not have one yet; once you do, come back here and do this. To keep this tutorial generic, let us assume you are in your project's directory and the yaml file is called environment.yml (a minimal sketch of what such a file looks like follows below). The conda command further down will create a Python sub-environment and install every dependency listed in that yaml file; just be sure you run it from the directory that contains the file.
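For reference, here is roughly what a minimal environment.yml could look like. The environment name and the package list here are placeholders, not taken from any specific project:

name: projectName
channels:
  - defaults
dependencies:
  - python=3.11
  - numpy
  - pandas
  - pip
  - pip:
      - requests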

First, add Anaconda to your PATH by editing ~/.bashrc and adding the following line to the bottom of the file:

export PATH=~/anaconda3/bin:$PATH

Now, to apply the change, either close the terminal window and re-open it, or run the command "source ~/.bashrc".

To check whether the magic happened, run the command "conda --version".

Now, to create a virtual environment, cd into the directory that has your project and run the following

conda env create -f environment.yml

Once the above is done downloading and installing, you should get a message like the one below

#
# To activate this environment, use
#
# $ conda activate projectName
#
# To deactivate an active environment, use
#
# $ conda deactivate

Now, whenever you open a terminal and want to activate an environment:

1- conda init (you can undo this later with conda init --reverse)
2- open a new shell
3- conda activate projectName

DeepSeek

What a pleasant surprise this is: something you can run locally on your own computer, or use for a fraction of the cost that comes with OpenAI or Anthropic's Claude.

DeepSeek-V3 is completely open-source and free. (https://github.com/deepseek-ai/DeepSeek-V3)

If you don't have the hardware resources for it, it is also available through a website much like ChatGPT's and through an incredibly affordable API.

How affordable ?

DeepSeek: $0.14 per million input tokens and $0.28 per million output tokens.
Claude: $3.00 per million input tokens and $15.00 per million output tokens.
ChatGPT: $2.50 per million input tokens and $10.00 per million output tokens.

So, the bottom line, looking at output tokens, is that DeepSeek is roughly fifty times cheaper than Claude ($0.28 vs $15.00, about two percent of the price) and around 35 times cheaper than OpenAI ($0.28 vs $10.00, about three percent of the price); on input tokens the gap is closer to twenty times. But what about quality?

In most scenarios it is comparable: in some cases DeepSeek wins, in others Claude or ChatGPT wins, but it is clearly up there.
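If you want to try the DeepSeek API from code, a minimal sketch could look like the following. This assumes the API is OpenAI-compatible (DeepSeek advertises it as such), that your key is stored in a DEEPSEEK_API_KEY environment variable, and that the chat model is called deepseek-chat; check their documentation for the current details.

import os
from openai import OpenAI  # DeepSeek's API is OpenAI-compatible, so the openai client works

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumption: your key lives in this env variable
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumption: the current name of the chat model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)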

Ollama

1- Installing

1.1 – Linux

On Debian Linux, installing Ollama is a one-liner; just enter the following in your terminal:

curl -fsSL https://ollama.com/install.sh | sh

Yup, that is it, move on to using Ollama

1.2 – Windows and Mac

Just go to https://ollama.com/, download the installer, and run it. You are done.

2 – Using it

Using Ollama is simple: open your terminal window or command prompt, activate your conda environment (or venv), and run the model you want. For the sake of this example, I will run:

conda activate projectName
ollama run llama3.2

llama3.3, with its 70 billion parameters, needs a minimum of around 64GB of RAM, so don't try that unless you have the memory for it. For comparison, the default llama3.2 model has 3 billion parameters, roughly 4% of the size of 3.3.

It should now download about 2GB of data (the weights of the 3-billion-parameter llama3.2 model), and you are done. Now you can ask it anything.

For example: "create an article for me explaining this and that".

Once done, just enter “/bye” to exit the ollama prompt and quit the session

If you want to, for example, clear the context or do anything else, use the /? command to get a list of available commands.

Now, you have used llama3.2, but on the Ollama models page (https://ollama.com/library) you will find many others that you can use.

Others include models that help you with coding, or models more targeted towards chatbot-style Q&A; either way, you should take a close look at them, even if just for the fun of it.

Is Ollama running?

Just visit http://localhost:11434/ in your browser, and you should see the message "Ollama is running".
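If you would rather check from code than from the browser, a tiny sketch like this does the same thing (it assumes the default Ollama port 11434 and that the requests package is installed in your environment):

import requests

# Ollama listens on localhost:11434 by default
print(requests.get("http://localhost:11434/").text)  # expected output: "Ollama is running"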

Modifying a model’s settings

An example of something you may want to modify: you have a GPU, but you do not want Ollama to use it. To do this, you need to create a model file. The steps to create this file for llama3.2 (the small one) are as follows:

# Copy the llama 3.2 base model file
ollama show llama3.2:latest --modelfile > ~/cpullama3.2.modelfile
# Edit the file ~/cpullama3.2.modelfile so that the FROM line reads
FROM llama3.2:latest
# Then go to the parameters section and add the parameters you need,
# in our case one telling Ollama to offload nothing to the GPU
PARAMETER num_gpu 0
# Create your custom model from the edited file
ollama create cpullama3.2 --file ~/cpullama3.2.modelfile

The last command above resulted in the following output

transferring model data 
using existing layer sha256:dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
using existing layer sha256:fcc5a6bec9daf9b561a68827b67ab6088e1dba9d1fa2a50d7bbcc8384e0a265d
using existing layer sha256:a70ff7e570d97baaf4e62ac6e6ad9975e04caa6d900d3742d37698494479e0cd
using existing layer sha256:966de95ca8a62200913e3f8bfbf84c8494536f1b94b49166851e76644e966396
using existing layer sha256:fcc5a6bec9daf9b561a68827b67ab6088e1dba9d1fa2a50d7bbcc8384e0a265d
using existing layer sha256:a70ff7e570d97baaf4e62ac6e6ad9975e04caa6d900d3742d37698494479e0cd
creating new layer sha256:650ff8e84978b35dd2f3ea3653ed6bf020a95e7deb031ceae487cdd98dedc2e3
creating new layer sha256:f29c86d4cf6a4072deefa0ff196b7960da63b229686497b02aad4f5202d263ea
writing manifest
success

What you did above is simply create a new "model" by copying the existing model's config file and tweaking it, nothing more, nothing less.
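From here, you run your custom model exactly like any other, under the name you gave it in the create step:

ollama run cpullama3.2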

Ollama API

So far, your terminal has let you chat with the model, much like what you do when you open Claude or ChatGPT. If you want to access things programmatically via the API, here is how.
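Here is a minimal sketch in Python; it assumes Ollama is running locally on its default port (11434), that you have already pulled llama3.2, and that the requests package is installed:

import requests

# Ollama exposes a REST API on localhost:11434 by default
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # any model you have pulled with ollama run / ollama pull
        "prompt": "Explain what a Jupyter notebook is in one paragraph.",
        "stream": False,  # ask for a single JSON reply instead of a token stream
    },
)
print(response.json()["response"])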

Everything AI – TOC

This blog has plenty of posts about AI; some are about AI tools, others are about installing AI locally, so this post is where I am putting all the AI material I have ever blogged about in one place.

The Local AI section is about creating your own AI server using freely available sources, the API section lists services that provide an API but cannot be installed locally, and the Online Services section covers tools you can only use online (neither installable locally nor accessible programmatically via an API).


Coding with Claude

I tried using Cursor AI for some time, basically for HTML, CSS, a bit of vanilla JavaScript, and Tailwind. Not bad at all with the above, but after two weeks it became unusable; it turns out the "free" tier is really a two-week free trial.

Without a paid plan, you are switched from Claude to "cursor-small" in the Ctrl+L menu, and autocomplete stops working.

In any case, I think subscribing to Cursor AI is a good idea; the time it saves you writing things that are not core to your work is well worth it. But I wanted to experiment with something new, so…

Cursor AI is basically a modified VS Code, and since Claude 3.5 Sonnet seems to be my favorite coding AI, I researched a bit and found that there are VS Code plugins that do much the same thing if I subscribe to the Anthropic API.

So, after getting my API key from Anthropic (the people behind Claude), all I needed to do was install a plugin (I tried more than one: CodeGPT, and Cline by Saoud Rizwan). Once installed, I entered my API key into the plugin's settings, and that is it.

Cline suits my needs better than CodeGPT for the time being, but I have not used either extensively yet, so it is too early to declare a winner.