Posts

script to pull github activity report

import os
import csv
import requests
from datetime import datetime

# Configuration
REPO = 'username/reponame'                # Repository name
START_DATE = '2023-09-03T00:00:00Z'       # Start of the date range
END_DATE = '2023-11-13T23:59:59Z'         # End of the date range
GITHUB_TOKEN = os.getenv('GITHUB_TOKEN')  # Retrieve GitHub token from environment variable

headers = {
    "Authorization": f"token {GITHUB_TOKEN}",
    "Accept": "application/vnd.github.v3+json"
}

output_file = 'github_report.csv'

def fetch_commits():
    """Fetch commits within the specified date range."""
    url = f'https://api.github.com/repos/{REPO}/commits'
    params = {
        'since': START_DATE,
        'until': END_DATE,
    }
    response = requests.get(url, headers=headers, params=params)
    commits = response.json()

    commit_data = []
    for commit in commits:
        date = commit
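The excerpt above cuts off mid-loop. As a sketch of how the loop and the CSV writing might continue — assuming the standard field layout of GitHub's commits API (the `commits_to_rows` and `write_report` names are my own, not from the original script):

```python
import csv

def commits_to_rows(commits):
    """Flatten GitHub API commit objects into date/author/message rows."""
    rows = []
    for commit in commits:
        info = commit['commit']  # nested commit object in the API response
        rows.append({
            'date': info['author']['date'],
            'author': info['author']['name'],
            'message': info['message'].splitlines()[0],  # first line only
        })
    return rows

def write_report(rows, output_file='github_report.csv'):
    """Write the flattened rows out as a CSV report."""
    with open(output_file, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=['date', 'author', 'message'])
        writer.writeheader()
        writer.writerows(rows)
```

Note the GitHub commits endpoint is paginated (30 results per page by default), so for a busy repo you would also need to follow the `Link` headers or pass a `page` parameter.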

Access to model mistralai/Mistral-Nemo-Base-2407 is restricted. You must be authenticated to access it.

If you get an error like "Access to model mistralai/Mistral-Nemo-Base-2407 is restricted. You must be authenticated to access it.", then authenticate with Hugging Face:

pip install huggingface_hub
huggingface-cli login

The login command will ask you for a token. Go to https://huggingface.co/settings/tokens, where there is an option to generate a token. Make sure you tick the checkboxes granting API access, generate the token, copy it, then go back to the terminal and paste it at the prompt.

Testing ChatGPT on coding - a script to make CSV to ICS

So I gave ChatGPT the task of writing a CSV-to-ICS converter, to take a spreadsheet of events and turn it into individual calendar file invitations. The idea is that you can build a calendar with lots of event items in a spreadsheet, and then output ICS files that you can just double-click to add to your calendar automatically. So here's the spreadsheet:

"Event Name","Event Start Date","Event End Date","Event Start Time","Event End Time","event description or notes","URL applicable"
"Meeting with John","2024-06-10","2024-06-10","09:00","10:00","Discuss project updates.","http://example.com"

And here's the script ChatGPT made. It seems to get confused by things like Daylight Saving Time and/or timezones, but you can fix that manually. It took some debugging, and I had to copy/paste my Mac's man page for date(1) into ChatGPT, but it got there eventually. The only
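Since ChatGPT's actual script isn't reproduced in full here, the following is a minimal sketch of the same conversion, assuming the column headers shown in the spreadsheet above. It deliberately writes "floating" local times with no timezone component, which reproduces the DST/timezone weakness mentioned (the function names are mine, not ChatGPT's):

```python
import csv
from datetime import datetime

def row_to_ics(row):
    """Build a minimal one-event ICS calendar from one spreadsheet row."""
    start = datetime.strptime(row['Event Start Date'] + ' ' + row['Event Start Time'],
                              '%Y-%m-%d %H:%M')
    end = datetime.strptime(row['Event End Date'] + ' ' + row['Event End Time'],
                            '%Y-%m-%d %H:%M')
    fmt = '%Y%m%dT%H%M%S'  # floating local time: no timezone/DST handling
    return '\r\n'.join([
        'BEGIN:VCALENDAR',
        'VERSION:2.0',
        'PRODID:-//csv2ics//EN',
        'BEGIN:VEVENT',
        f'DTSTART:{start.strftime(fmt)}',
        f'DTEND:{end.strftime(fmt)}',
        f'SUMMARY:{row["Event Name"]}',
        f'DESCRIPTION:{row["event description or notes"]}',
        f'URL:{row["URL applicable"]}',
        'END:VEVENT',
        'END:VCALENDAR',
    ])

def convert(csv_path):
    """Write one double-clickable .ics file per spreadsheet row."""
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            name = row['Event Name'].replace(' ', '_') + '.ics'
            with open(name, 'w') as out:
                out.write(row_to_ics(row))
```

To handle timezones properly you would emit `DTSTART;TZID=...` values instead of floating times, per RFC 5545.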

Hallucinations/inaccuracies

You have to test LLMs individually for hallucinations and inaccuracies. The outputs below are attempts at summarising my first academic article. Useless!

john@john-GF63-Thin-11SC:~/ai$ ./scripts/ollama_summarise_one.sh SAMPLES/hawaiiarticle.txt

Sure, here's the paragraph you requested:

>The problem with some of the analyses of Libet is that they make it look like the details were complicated. However, I do believe I agree with McCrone that Libet's data is sloppier than people often realize. Glynn made that point a long time ago and it was the main point of my plots in the Tucson III Proceedings.

john@john-GF63-Thin-11SC:~/ai$ ./scripts/ollama_summarise_one.sh SAMPLES/hawaiiarticle.txt

Sure, here is the paragraph you requested:

>Libet showed that a thalamic stimulus requires a duration of about 250 msec to be felt whereas a 20 msec skin stimulus is adequate. This shows that the intensity of the stimulus plays an important role in determining the duration of the require

Ollama3 works - running a local LLM

Do this:

curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3

It will then download the llama3 model, about 4.7 GB. To run it:

ollama run llama3

If you want it to read and summarise a text file called textfile.txt:

ollama run llama3 "Read the text file $(cat textfile.txt) and summarise the findings in one paragraph of no more than 300 words"

And here's a GUI for it: https://github.com/amithkoujalgi/ollama-pdf-bot

The system runs as a service (daemon) on Linux. CPU usage is high but not bad, and response time is much better than localGPT. It stores the model files (large LLM blobs) in /usr/share/ollama/.ollama/models. The models can be downloaded from https://huggingface.co or https://ollama.com/library.
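Because the service runs as a daemon, it also exposes a local HTTP API on port 11434, so you can do the same summarisation from a script instead of the CLI. A sketch using Ollama's /api/generate endpoint (the `summarise_file` helper and the prompt wording are my own; this assumes the server is running locally):

```python
import json
import urllib.request

OLLAMA_URL = 'http://localhost:11434/api/generate'

def build_payload(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        'model': model,
        'prompt': prompt,
        'stream': False,  # return one JSON object instead of a token stream
    }

def summarise_file(path, model='llama3'):
    """Ask a local Ollama server to summarise a text file in one paragraph."""
    with open(path) as f:
        text = f.read()
    prompt = ('Read the following text and summarise the findings in one '
              'paragraph of no more than 300 words:\n\n' + text)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())['response']
```

One advantage over the shell one-liner: the file contents go in the request body rather than through $(cat ...), so quoting and argument-length limits stop being a problem.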

Some rough notes on FreedomGPT and PrivateGPT

The following are rough notes. I'll tidy them up when I get the steps 100% right.

https://www.freedomgpt.com/
https://huggingface.co/mosaicml/mpt-30b
https://files.mistral-7b-v0-1.mistral.ai/mistral-7B-v0.1.tar

Use case: WordPress viruses; recognising one
https://www.icloud.com/shortcuts/8bcc2979d6e34a3d90a48c3f77f96d68

openai - sk-[redacted]
hugg tok - hf_[redacted]

https://developer.nvidia.com/blog/nvidia-releases-open-source-gpu-kernel-modules/
https://github.com/eosphoros-ai/DB-GPT

privateGPT fix:

sudo apt-get install python3-dev
python3.10 -m pip install --upgrade pip
sudo apt install build-essential
python3.10 -m pip install hnswlib
python3.10 -m pip install chromadb

then

python3.10 -m pip install chroma-migrate
chroma-migrate
python3.10 -m pip install -r requirements.txt
pip install pycc

How to download the models: Visit the Code Llama repository in GitHub and follow the instructions in the README to run the down

Manual and automatic parameters in AI training and output

Language models, such as GPT-3, have billions or trillions of parameters that allow them to understand and generate human-like text. These parameters are the numerical weights the model uses to learn and represent language patterns during training. While some settings, like the temperature of a generative model, are set explicitly to control behaviour, the vast majority of parameters are learned automatically through a process called training, and humans do not set them manually. You can read more about the other manually configured parameters here:

https://michaelehab.medium.com/the-secrets-of-large-language-models-parameters-how-they-affect-the-quality-diversity-and-32eb8643e631

Manual parameters: adapted from the blog above, some of the common LLM parameters are temperature, number of tokens, top-p, presence penalty, and frequency penalty.

Temperature: Temperature is a hyperparameter used in generative language models. It controls th
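To make the temperature idea concrete, here is a small sketch of how it is typically applied at sampling time: the model's raw scores (logits) for each candidate token are divided by the temperature before being turned into probabilities, so a low temperature sharpens the distribution towards the top token and a high temperature flattens it (the logit values below are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw logits into probabilities, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits [2.0, 1.0, 0.1], a temperature of 0.1 puts almost all probability on the first token (near-greedy decoding), while a temperature of 2.0 spreads probability much more evenly, giving more diverse but less predictable output.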