Posts

Showing posts from 2025

Juggling with Google's Gemini's Canvases

Tip 1: The "Master Canvas" or "Table of Contents" Method (Your Core Problem)

This is the most effective workaround for managing multiple canvases in a single conversation.

1. Create a Master Canvas: At the very beginning of your project or complex conversation, create a dedicated canvas for navigation. You can start with a prompt like: "Create a canvas for me that will act as a table of contents for this entire project on [Your Topic]. Title it 'Project Dashboard'."
2. Generate and Link: As you work, whenever you need a new, dedicated space for a sub-topic, create a new canvas: "Okay, now create a new canvas to brainstorm marketing angles for this project."
3. Get the Link: Once Gemini creates the new canvas, click on it. In the top-right corner of the Canvas interface, you'll see a "Share" button. Click it, then click "Copy link".
4. Update the Master Canvas: Go back to your "Project Dashboard" canvas. ...
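Following these steps, the master canvas ends up as a plain linked list of the project's other canvases. A hypothetical "Project Dashboard" (all titles and links below are made-up placeholders, not real share links) might look something like:

```markdown
# Project Dashboard

Table of contents for this project. Each entry links to its own canvas.

1. [Project Brief](https://gemini.google.com/share/example-brief)
2. [Marketing Angles Brainstorm](https://gemini.google.com/share/example-marketing)
3. [Draft Landing Page Copy](https://gemini.google.com/share/example-copy)
```

Each time you create a new canvas, you paste its copied share link as a new entry here, so one canvas stays pinned as the hub for the whole conversation.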

The garden path to the internet

Signpost: Welcome, new user, to the computer world. What happens when thousands of people spend hours and hours doing mundane things? Well, for a start, we get condensed versions of truth. Hours spent converting CD music to mp3, and months spent downloading mp3 songs, led to the knowledge that 96 kbps is small in memory and still pretty good quality, while 256 kbps is perfect quality but hugely wasteful of memory. Anything less or more is extraneous to human ears. A 44,100 Hz sample rate is fine.

But I feel I have to at least show you how I grew into the computer world, from getting a PC in the school holidays to play 3D games, to playing on LAN with gamers just before gaming became e-sports.

I'll index these memoirs chronologically; at least, in order of reality and not in order of writing.

First toy

My first toy was a computer that asked questions, like spelling questions. If you typed in the correct word, it would say "yes, you are right. Find the answer to question _3_" ...

Ollama -

Ollama is how you can run an LLM in your home, without the internet. Trust me, it is fun.

Installing Ollama

Hardware! Pick the model that you want to run. If the model is large, then you will need a large amount of RAM. The size of a model is measured in parameters, and as a rough rule of thumb (for 8-bit quantized weights, one byte per parameter), one billion parameters require about one gigabyte of RAM: a 16B-parameter model needs roughly 16 GB of RAM, and a 500M-parameter model roughly 500 MB.

Then you will need an NVIDIA GPU with a "Compute Capability" greater than 5.

Why? The embedding space is where each word and syllable is mapped using a large number of dimensions. Imagine each word as a point floating in space. From the centre of that space, each point is like a radius with its various angles. An easy way to compute how close each point is to its neighbouring points is the same mathematics (cosine similarity) that the NVIDIA GPU's instruction sets can run across hundreds of cores, while your more complex CPU with its handful of cores can do these...
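The cosine-similarity arithmetic described above can be sketched in a few lines of plain Python. This is an illustration only, not Ollama's actual code: the three-dimensional "embeddings" below are made-up toy vectors (real models use hundreds or thousands of dimensions, which is exactly the workload a GPU spreads across its many cores).

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings: "cat" and "kitten" point in similar
# directions; "car" points somewhere else entirely.
cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
car = [0.1, 0.9, 0.3]

print(cosine_similarity(cat, kitten))  # close to 1.0 (similar meaning)
print(cosine_similarity(cat, car))     # noticeably lower (unrelated)
```

The whole computation is just multiplications and additions repeated across every dimension of every vector, which is why hundreds of simple GPU cores finish it far faster than a handful of sophisticated CPU cores.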