Computing the MASE for historical_forecasts in darts

Posted on Thu 11 May 2023 in Time series prediction, darts • Tagged with Time series prediction, darts

I haven't found any online example of how to compute the MASE (Mean Absolute Scaled Error) for historical forecasts in darts, so I thought I'd share this one:

import numpy as np
from darts import TimeSeries
from darts.models import NaiveSeasonal
from darts.metrics import mase

my_data = [40, 42, 41, 45, 45, 47, 50]
my_np = np.array(my_data)
my_ts = TimeSeries.from_values(my_np)
my_model = NaiveSeasonal(K=1)  # with K=1 this is the naive "repeat the last value" model
start = 3  # position in the series where the historical forecasts begin
my_forecast = my_model.historical_forecasts(series=my_ts, start=start, forecast_horizon=1)
# insample is the part of the series that comes before the first forecasted point
my_mase = mase(actual_series=my_ts, pred_series=my_forecast, insample=my_ts[:start])
print(f"my_mase: {my_mase}")
my_ts.plot(label="original", marker="o")
my_forecast.plot(label="historical_forecasts", marker="x")

Notice that the insample parameter is set to the original time series up to the start of the historical forecast.
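
If your data is seasonal, the scaling used by the MASE can take the seasonality into account through the m parameter of mase() (1 by default). This is just a sketch reusing the objects above; the m=7 mentioned in the comment is a hypothetical value for daily data with a weekly pattern:

my_seasonal_mase = mase(
    actual_series=my_ts,
    pred_series=my_forecast,
    insample=my_ts[:start],
    m=1,  # season length used for the scaling, e.g. m=7 for daily data with a weekly pattern
)
print(f"my_seasonal_mase: {my_seasonal_mase}")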


Projects to test AI chat bots locally

Posted on Tue 02 May 2023 in AI, chat bots, Python, LLM • Tagged with AI, chat bots, Python, LLM, ChatAI, ColossalChat, LocalAI, Text generation web UI, MLC LLM

Many people are interested in trying AI (Artificial Intelligence) chat bots locally, but they don't know how to start. These are some projects that I've found that can be used to test different LLMs (Large Language Models) and chat bots:

  • Text generation web UI. It tries to be the AUTOMATIC1111/stable-diffusion-webui of text generation. Based on Gradio, it provides a web interface to test different LLMs.

  • Serge. It's a web interface built with Svelte that uses llama.cpp to run models. It's dockerized. It also provides an API.

  • openplayground. Works with local models and remote APIs. Can be installed as a Python package or run with Docker. It's implemented as a Flask application with a React frontend.

  • ChatAI. It is a desktop application for Windows and Ubuntu developed in Python with PyQt. It uses a Google Flan-T5 model and it is based on A. I. Runner, a framework to run AI models.

  • ColossalChat. ColossalAI provides a set of tools to develop deep learning models, and it includes ColossalChat, which is a chat bot based …


Continue reading

Visual Studio Code unable to connect

Posted on Fri 21 April 2023 in Visual Studio Code • Tagged with Visual Studio Code, VS Code

Sometimes I get this error:

Unable to connect to VS Code server: Error in request.
Error: connect ENOENT [...]

I always have to search for the solution, so I decided to write it down here. From this issue, I found that running this solves the problem:

# Find the IPC socket of the running VS Code server for the current user
VSCODE_IPC_HOOK_CLI=$(lsof | grep $USER | grep vscode-ipc | awk '{print $(NF-1)}' | head -n 1)
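
Note that the variable has to be visible to the code command that runs afterwards, so depending on your setup you may also need to export it (export VSCODE_IPC_HOOK_CLI=...).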

Metrics per stage in k6

Posted on Fri 25 November 2022 in k6 • Tagged with k6

If you want custom metrics for each stage of an executor, you have to tag the stages (with tagWithCurrentStageIndex()) and add a bogus threshold for each metric you want, using the tag stage:i (where i is the index of the stage). The bogus threshold has to be different for gauges (max>0) and rates (rate>=0).

This example file shows how to do it:

import http from 'k6/http';
import { tagWithCurrentStageIndex } from 'https://jslib.k6.io/k6-utils/1.3.0/index.js';

const stages = [
    { target: 1, duration: '10s' },
    { target: 5, duration: '10s' },
];

export const options = {
  scenarios: {
    contacts: {
      executor: 'ramping-arrival-rate',
      timeUnit: '1s',
      preAllocatedVUs: 10,
      maxVUs: 200,
      stages: stages,
    },
  },
  // Uncomment the next line if you want the count statistic
  // summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(90)', 'p(95)', 'p(99)', 'count'],
  thresholds: {
    // Intentionally empty. We'll programmatically define our bogus
    // thresholds (to generate the sub-metrics) below. In your real-world
    // load test, you can add any real thresholds you want here.
  }
}

function addThreshold(thresholdName, threshold) {
    if (!options.thresholds[thresholdName]) {
        options.thresholds[thresholdName] = [];
    }

    // 'max>=0' is a …

Continue reading

Installing Tensorflow Serving in Amazon EC2 Linux

Posted on Fri 11 November 2022 in tensorflow, ec2 • Tagged with tensorflow

How to install and test TensorFlow Serving in an Amazon EC2 instance running Amazon Linux 2. We will use Docker and serve a ResNet image-classification model. These are the commands:

# Install docker
sudo yum update
sudo yum install docker
sudo usermod -a -G docker ec2-user
newgrp docker
sudo systemctl enable docker.service
sudo systemctl start docker.service

# Prepare the resnet model
rm -rf /tmp/resnet
# -O saves the download as resnet.tar.gz (a lowercase -o would only redirect wget's log output)
wget "https://tfhub.dev/tensorflow/resnet_50/classification/1?tf-hub-format=compressed" -O resnet.tar.gz
mkdir -p /tmp/resnet/123
tar xvfz resnet.tar.gz -C /tmp/resnet/123/

# Create and run a docker image with tensorflow serving using the resnet model
docker run -d --name serving_base tensorflow/serving
docker cp /tmp/resnet serving_base:/models/resnet
docker commit --change "ENV MODEL_NAME resnet" serving_base $USER/resnet_serving
docker kill serving_base
docker rm serving_base
docker run -p 8500:8500 -p 8501:8501 -t $USER/resnet_serving &

To test that the serving works, you can run:

sudo yum install git
git clone https://github.com/tensorflow/serving …
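
As an alternative quick check, the REST endpoint exposed on port 8501 can also be queried directly from Python. This is just a sketch that assumes the model is served under the name resnet and that its signature accepts a [batch, 224, 224, 3] float tensor, which matches the resnet_50 classification module downloaded above:

# Minimal REST smoke test for the container started above.
import json

import numpy as np
import requests

BASE_URL = "http://localhost:8501/v1/models/resnet"

# Model status: the loaded version should be reported as AVAILABLE.
print(requests.get(BASE_URL).json())

# Dummy prediction request, just to check that the predict endpoint answers.
dummy_image = np.zeros((1, 224, 224, 3), dtype=np.float32)
payload = json.dumps({"instances": dummy_image.tolist()})
response = requests.post(f"{BASE_URL}:predict", data=payload)
print(response.status_code)          # expect 200
print(list(response.json().keys()))  # expect a "predictions" key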

Continue reading