Showing posts from October, 2023

Manual and automatic parameters in AI training and output

Language models such as GPT-3 have billions (or, in the largest cases, trillions) of parameters that allow them to understand and generate human-like text. These parameters are the numerical weights the model uses to represent language patterns. The vast majority are learned automatically during training and are never set by hand; a small number of settings, however, like the temperature of a generative model, are configured explicitly to control the model's behavior. You can read more about these manually configured parameters here: https://michaelehab.medium.com/the-secrets-of-large-language-models-parameters-how-they-affect-the-quality-diversity-and-32eb8643e631

Manual parameters: adapted from the blog above, some common manually set LLM parameters are temperature, number of tokens, top-p, presence penalty, and frequency penalty. Temperature: a hyperparameter used in generative language models. ...
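To make the temperature parameter concrete, here is a minimal sketch in NumPy of how temperature reshapes the probability distribution a model samples its next token from. The function name `sample_with_temperature` is my own invention for illustration, not an API from any real library; real LLM frameworks implement the same idea internally.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Turn raw model logits into a probability distribution,
    sharpened (T < 1) or flattened (T > 1) by the temperature,
    then sample one token index from it."""
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs), probs

# Three candidate tokens with raw scores (logits) from the model.
logits = [2.0, 1.0, 0.1]

# Low temperature: almost all probability piles onto the top logit
# (near-deterministic output).
_, cold = sample_with_temperature(logits, temperature=0.1)

# High temperature: the distribution flattens toward uniform
# (more diverse, more random output).
_, hot = sample_with_temperature(logits, temperature=10.0)
```

With `temperature=0.1` the first token gets essentially all the probability mass, while `temperature=10.0` leaves the three options nearly equally likely; this is the diversity-versus-determinism dial the blog post above describes.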

What are "tensors" ?

In the context of AI and machine learning, a "tensor" is a fundamental data structure that represents a multi-dimensional array of numerical values. Tensors can have various numbers of dimensions: scalars (0-dimensional), vectors (1-dimensional), matrices (2-dimensional), and higher-dimensional arrays. Tensors are a core concept in libraries and frameworks commonly used for deep learning, such as TensorFlow and PyTorch. Here are some key points about tensors in AI:
- Data representation: tensors represent data in a format that can be processed by neural networks and other machine learning algorithms. For example, in image processing, a color image can be represented as a 3D tensor whose dimensions correspond to height, width, and color channels (e.g., red, green, and blue).
- Mathematical operations: tensors are designed to facilitate mathematical operations, including addition, multiplication, and more complex operations like convolution and matrix multiplication. ...
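The ranks described above can be sketched in a few lines of NumPy (the same ideas carry over to TensorFlow and PyTorch tensors; the shapes below are illustrative, not tied to any particular model):

```python
import numpy as np

# Tensors of increasing rank (number of dimensions).
scalar = np.array(3.14)                 # 0-D tensor (a single number)
vector = np.array([1.0, 2.0, 3.0])      # 1-D tensor
matrix = np.ones((2, 3))                # 2-D tensor
# A 64x64 RGB image as a 3-D tensor: height x width x color channels.
image = np.zeros((64, 64, 3))

print(scalar.ndim, vector.ndim, matrix.ndim, image.ndim)  # 0 1 2 3

# Tensors support elementwise math and matrix multiplication.
doubled = vector * 2                    # elementwise: [2, 4, 6]
product = matrix @ np.ones((3, 4))      # (2,3) @ (3,4) -> shape (2,4)
```

Note how the shapes compose: matrix multiplication requires the inner dimensions to match, which is exactly the bookkeeping neural-network layers do when passing tensors from one layer to the next.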

AI tests - post in progress

This post documents tests used to assess AI LLM performance. As I find more tests and understand them, I will post them here. AGIEval ("AGI evaluation") is a benchmark that checks whether AIs can pass tests designed for humans. AGIEval focuses on advanced reasoning abilities beyond natural language, through inductive, deductive, and spatial reasoning questions; models must solve challenges like logical puzzles using language. It appears in a paper on arXiv as follows: [Submitted on 13 Apr 2023 (v1), last revised 18 Sep 2023 (this version, v2)] AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models, by Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. "Traditional benchmarks, which rely on artificial datasets, may not accurately represent human-level capabilities. In this paper, we introduce AGIEval, a novel benchmark specifically designed to assess foundation models in the context of human-centric...

Which LLM?

I am experimenting with various LLMs. So far, there seems to be a tradeoff between intelligence, speed, and usability. Thus far:
- Falcon seems a bit stupid and gives some fake answers, but is reasonably fast.
- LLaMA is reasonably fast and seems quite smart, but refuses to answer research questions on anything remotely controversial, e.g. politicians. I tried asking it about certain fake news stories as well, and it got all sanctimonious and told me to look at reputable sources... er, I assumed you, LLaMA, were a reputable source. Apparently not. In short, it is censored, which makes it useless for my purposes.
- GPT4All is not as good as LLaMA so far. I can't tell yet whether it is censored.
- Others are either too slow or give stupid answers.
As I try them all out, I will list pros and cons here.