Alpaca-LoRA


Clone the git repository:

$ git clone https://github.com/tloen/alpaca-lora.git

Cloning into 'alpaca-lora'...
remote: Enumerating objects: 607, done.
remote: Total 607 (delta 0), reused 0 (delta 0), pack-reused 607
Receiving objects: 100% (607/607), 27.84 MiB | 3.65 MiB/s, done.
Resolving deltas: 100% (358/358), done.

Go into the repository directory:

$ cd ./alpaca-lora/

Install the Python dependencies with pip (this step can take several minutes):

$ pip install -r requirements.txt

Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/huggingface/peft.git (from -r requirements.txt (line 9))
  Cloning https://github.com/huggingface/peft.git to /tmp/pip-req-build-9hr4dfxr
  Running command git clone --filter=blob:none --quiet https://github.com/huggingface/peft.git /tmp/pip-req-build-9hr4dfxr
  Resolved https://github.com/huggingface/peft.git to commit cf04d0353f0343cbf66627228c4495f51669af34
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting accelerate
  Downloading accelerate-0.25.0-py3-none-any.whl (265 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 265.7/265.7 kB 2.5 MB/s eta 0:00:00
Collecting appdirs
  Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting loralib

(...)

Successfully installed accelerate-0.25.0 aiofiles-23.2.1 aiohttp-3.9.1 aiosignal-1.3.1 altair-5.2.0 annotated-types-0.6.0 anyio-4.2.0 appdirs-1.4.4 asttokens-2.4.1 attrs-23.1.0 bitsandbytes-0.41.3.post2 black-23.12.1 colorama-0.4.6 contourpy-1.2.0 cycler-0.12.1 datasets-2.16.0 dill-0.3.7 executing-2.0.1 fastapi-0.108.0 ffmpy-0.3.1 filelock-3.13.1 fire-0.5.0 fonttools-4.47.0 frozenlist-1.4.1 fsspec-2023.10.0 gradio-4.12.0 gradio-client-0.8.0 h11-0.14.0 httpcore-1.0.2 httpx-0.26.0 huggingface-hub-0.20.1 importlib-resources-6.1.1 ipython-8.19.0 jedi-0.19.1 jinja2-3.1.2 jsonschema-4.20.0 jsonschema-specifications-2023.12.1 kiwisolver-1.4.5 loralib-0.1.2 markupsafe-2.1.3 matplotlib-3.8.2 matplotlib-inline-0.1.6 mpmath-1.3.0 multidict-6.0.4 multiprocess-0.70.15 mypy-extensions-1.0.0 networkx-3.2.1 numpy-1.26.2 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.18.1 nvidia-nvjitlink-cu12-12.3.101 nvidia-nvtx-cu12-12.1.105 orjson-3.9.10 pandas-2.1.4 parso-0.8.3 pathspec-0.12.1 peft-0.7.2.dev0 platformdirs-4.1.0 prompt-toolkit-3.0.43 psutil-5.9.7 pure-eval-0.2.2 pyarrow-14.0.2 pyarrow-hotfix-0.6 pydantic-2.5.3 pydantic-core-2.14.6 pydub-0.25.1 pyparsing-3.1.1 python-multipart-0.0.6 referencing-0.32.0 rpds-py-0.16.2 safetensors-0.4.1 semantic-version-2.10.0 sentencepiece-0.1.99 shellingham-1.5.4 sniffio-1.3.0 stack-data-0.6.3 starlette-0.32.0.post1 sympy-1.12 termcolor-2.4.0 tokenize-rt-5.2.0 tokenizers-0.15.0 tomlkit-0.12.0 toolz-0.12.0 torch-2.1.2 tqdm-4.66.1 traitlets-5.14.0 transformers-4.36.2 triton-2.1.0 tzdata-2023.4 uvicorn-0.25.0 wcwidth-0.2.12 xxhash-3.4.1 yarl-1.9.4
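
If you prefer to keep these Python packages separate from your system installation, you can optionally create a virtual environment first and run pip inside it (a minimal sketch; the directory name venv-alpaca is just an example):

$ python3 -m venv venv-alpaca
$ source venv-alpaca/bin/activate
$ pip install -r requirements.txt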

If Docker is not installed yet, install it. For example, on Fedora:

sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
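
Optionally, you can also enable the Docker service so that it starts automatically at boot (a standard systemd command; the article starts the daemon manually further below):

sudo systemctl enable docker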

Or on Ubuntu.

Start by adding Docker's official GPG key:

sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Then add the repository to the Apt sources:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Finally, install Docker:

$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Start Docker:

$ sudo systemctl start docker
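
To check that the daemon is working, you can run Docker's small test image, which just prints a greeting and exits:

$ sudo docker run --rm hello-world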

Build the Docker image for Alpaca-LoRA (this step can take several minutes):

$ sudo docker build -t alpaca-lora .
[+] Building 542.5s (4/10)                                                                                                                                      docker:default
 => [internal] load .dockerignore                                                                                                                                         0.0s
 => => transferring context: 141B                                                                                                                                         0.0s
 => [internal] load build definition from Dockerfile                                                                                                                      0.0s
 => => transferring dockerfile: 733B                                                                                                                                      0.0s
 => [internal] load metadata for docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04                                                                                           2.1s
 => [internal] load build context                                                                                                                                         0.5s
 => => transferring context: 118.27MB                                                                                                                                     0.4s
 => [1/6] FROM docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f

(...)

[+] Building 1.1s (11/11) FINISHED docker:default

=> [internal] load .dockerignore                                                                                                                                                                                       0.0s
=> => transferring context: 141B                                                                                                                                                                                       0.0s
=> [internal] load build definition from Dockerfile                                                                                                                                                                    0.0s
=> => transferring dockerfile: 733B                                                                                                                                                                                    0.0s
=> [internal] load metadata for docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04                                                                                                                                         1.0s
=> [1/6] FROM docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f                                                                                   0.0s
=> [internal] load build context                                                                                                                                                                                       0.0s
=> => transferring context: 7.33kB                                                                                                                                                                                     0.0s
=> CACHED [2/6] RUN apt-get update && apt-get install -y     git     curl     software-properties-common     && add-apt-repository ppa:deadsnakes/ppa     && apt install -y python3.10     && rm -rf /var/lib/apt/lis  0.0s
=> CACHED [3/6] WORKDIR /workspace                                                                                                                                                                                     0.0s
=> CACHED [4/6] COPY requirements.txt requirements.txt                                                                                                                                                                 0.0s
=> CACHED [5/6] RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10     && python3.10 -m pip install -r requirements.txt     && python3.10 -m pip install numpy --pre torch --force-reinstall --index-url   0.0s
=> CACHED [6/6] COPY . .                                                                                                                                                                                               0.0s
=> exporting to image                                                                                                                                                                                                  0.0s
=> => exporting layers                                                                                                                                                                                                 0.0s
=> => writing image sha256:867aeac24d83cc145aa7c002aec3758026703bbeec79081f3f6b88b63257654b                                                                                                                            0.0s
=> => naming to docker.io/library/alpaca-lora   
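
You can verify that the image was built and see its size with:

$ sudo docker images alpaca-lora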


If the command fails with an error you do not understand, run it again. If one of the commands executed inside the build is the one that fails, try running it on its own, then re-run the original build command.
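
To see more precisely which internal command fails, you can re-run the build without the cache and with plain progress output (standard docker build options):

$ sudo docker build --no-cache --progress=plain -t alpaca-lora .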

Choose a model from those available at https://huggingface.co/models. Here we pick the Vigogne model (Vigostral), which is tuned for French. Several gigabytes of model weights will be downloaded.

$ sudo docker run --shm-size 64g -p 7860:7860 -v ${HOME}/.cache:/root/.cache --rm alpaca-lora generate.py --load_8bit --base_model 'bofenghuang/vigostral-7b-chat' --lora_weights 'bofenghuang/vigostral-7b-chat'

To run the original English Alpaca-LoRA weights instead, the same command works with a different base model and LoRA adapter (here exposed on host port 7861 so it can run alongside the first instance):

$ sudo docker run --shm-size 64g -p 7861:7860 -v ${HOME}/.cache:/root/.cache --rm alpaca-lora generate.py --load_8bit --base_model 'baffo32/decapoda-research-llama-7B-hf' --lora_weights 'tloen/alpaca-lora-7b'
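
Once the container is running, generate.py serves a Gradio web interface on port 7860 inside the container; with the port mappings above it should be reachable at http://localhost:7860 for the first example (and http://localhost:7861 for the second). If the container does not see your GPU, you may need the NVIDIA Container Toolkit on the host and the --gpus option, as in this variant of the first command (assumes the toolkit is installed):

$ sudo docker run --gpus=all --shm-size 64g -p 7860:7860 -v ${HOME}/.cache:/root/.cache --rm alpaca-lora generate.py --load_8bit --base_model 'bofenghuang/vigostral-7b-chat' --lora_weights 'bofenghuang/vigostral-7b-chat'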