Part 2: Cloud VAME Project without a Local GPU#

Note: Open this Jupyter notebook in Google Colab.

Google Colab Setup#

# Mount your Google Drive so the project files can be stored there
from google.colab import drive
drive.mount('/content/drive')

Open Google Drive in a separate tab to get a better overview of the directory structure: https://drive.google.com/drive/my-drive

Click on Runtime > Change runtime type and select GPU as the hardware accelerator. Check access to your GPU with the command below.

!nvidia-smi
# Import some packages to confirm they are available in the Colab runtime ...
import pip
import jupyter
import torchvision
# Install more packages ...
!pip install pytest-shutil
!pip install scipy
!pip install numpy
!pip install matplotlib
!pip install pandas
!pip install ruamel.yaml
!pip install scikit-learn
!pip install pyyaml
!pip install opencv-python-headless
!pip install h5py
!pip install umap-learn
!pip install networkx
!pip install tqdm
# check pip installed packages
# !pip list -v
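
Optionally, you can also confirm from Python that the GPU is visible to PyTorch (torch comes preinstalled in the Colab runtime, so no extra install is needed):

# Optional sanity check: verify that PyTorch can see the Colab GPU
import torch
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))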

Download and Install VAME#

# Download VAME
!git clone https://github.com/LINCellularNeuroscience/VAME.git
%cd /content/VAME
!python setup.py install
import vame

Download VAME Example Data#

Download video-1.csv and video-1.mp4 from here: https://drive.google.com/drive/folders/1feCukw2H0teLvaPbwelhD2E2ns_viHFX?usp=sharing and save them to your own Google Drive.
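
Once both files are on your Drive, you can check from the notebook that they are where the next steps expect them (the paths below assume a VAME_Example folder, matching Step 1):

# Optional check: confirm the example files are in place on your Drive.
# Adjust the paths if you saved them somewhere else.
import os
for f in ['/content/drive/MyDrive/VAME_Example/video-1.mp4',
          '/content/drive/MyDrive/VAME_Example/video-1.csv']:
    print(f, '->', 'found' if os.path.exists(f) else 'MISSING')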

Step 1: Create a new VAME project#

project = 'NewVAMEProject_20211201'
working_directory = '/content/drive/MyDrive/VAME_Example'
# videos is a list of video paths; here it contains only the example video
videos = ['/content/drive/MyDrive/VAME_Example/video-1.mp4']

Step 1.1: Initialize a VAME Project#

With the variables above, initialize a new project and save the path to the config.yaml file as config. This will create a new project folder with a predefined structure in your working_directory.

config = vame.init_new_project(project=project, videos=videos, working_directory=working_directory, videotype='.mp4')
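
To see the predefined structure that was just created, you can walk the project folder (an optional check; config is the config.yaml path returned above):

# Optional: list the folder structure created by init_new_project
import os
project_path = os.path.dirname(config)
for root, dirs, files in os.walk(project_path):
    depth = root[len(project_path):].count(os.sep)
    print('  ' * depth + os.path.basename(root) + '/')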

Alternatively, you can open an existing VAME project by pointing config at its existing config.yaml file:

config = '/YOUR/WORKING/DIRECTORY/NewVAMEProject_20211201/config.yaml'

Step 1.2: Get your data ready#

First, move your DeepLabCut .csv data to the corresponding videos/pose_estimation/ directory.
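
For the example data this can be done directly from the notebook; a minimal sketch, assuming you saved video-1.csv to the VAME_Example folder used above:

# Sketch (assumed paths): copy the DeepLabCut .csv into the project's
# videos/pose_estimation/ folder; the .csv is expected to carry the same
# name as its video (video-1.csv for video-1.mp4).
import os
import shutil
project_path = os.path.dirname(config)  # config points at .../config.yaml
src = '/content/drive/MyDrive/VAME_Example/video-1.csv'
dst = os.path.join(project_path, 'videos', 'pose_estimation', 'video-1.csv')
shutil.copy(src, dst)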

Then, use the function below to align your behavior videos egocentrically. pose_ref_index is a list of reference coordinate indices used for the alignment, in this case [0, 5] (snout and tail). Example keypoint order: 0: snout, 1: forehand_left, 2: forehand_right, 3: hindleft, 4: hindright, 5: tail.

vame.egocentric_alignment(config, pose_ref_index=[0,5])

Alternatively, if your data is already egocentrically aligned, or you don’t really believe that behavior should be studied egocentrically, transform your DeepLabCut .csv files to numpy arrays.

# Point datapath at your project's videos/pose_estimation/ directory
vame.csv_to_numpy(config, datapath='/YOUR/WORKING/DIRECTORY/NewVAMEProject_20211201/videos/pose_estimation/')

Step 1.3: Create the Training Set#

vame.create_trainset(config)

Step 2: Train your VAME Model#

vame.train_model(config)

Step 3: Evaluate the model#

vame.evaluate_model(config)

Step 4: Motif segmentation#

vame.pose_segmentation(config)

Step 5: Create Motif Videos#

vame.motif_videos(config, videoType='.mp4')

Step 6: Create behavioral hierarchies#

vame.community(config, show_umap=False, cut_tree=2)

Step 7: Create community videos#

vame.community_videos(config)

Step 8: Create UMAP visualization#

vame.visualization(config, label=None)  # options for label: None, "motif", "community"
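
If you want all three variants, you can simply loop over the label options from the comment above:

# Optional: generate the plain, motif-labeled and community-labeled UMAPs
for label in [None, "motif", "community"]:
    vame.visualization(config, label=label)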

Step 9: Create GIF Animation#

# Note: This function is currently very slow. Once the frames are saved you can create a video
# or gif via e.g. ImageJ or other tools
vame.gif(config, pose_ref_index=[0,5], subtract_background=True, start=None, 
         length=500, max_lag=30, label='community', file_format='.mp4', crop_size=(300,300))
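
As one way to handle the second part of that note without leaving Colab, here is a minimal sketch that assembles the saved frames into a GIF with imageio (preinstalled on Colab). frames_dir, the .png extension and the output path are assumptions; point them at wherever vame.gif() actually wrote the frames:

# Sketch (assumed paths): build an animated GIF from the saved frames
import os
import imageio
frames_dir = '/PATH/TO/SAVED/FRAMES'  # placeholder: folder with the frames
frame_files = sorted(f for f in os.listdir(frames_dir) if f.endswith('.png'))
frames = [imageio.imread(os.path.join(frames_dir, f)) for f in frame_files]
imageio.mimsave('/content/drive/MyDrive/VAME_Example/animation.gif', frames)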

Step 10: Create a generative model#

vame.generative_model(config, mode="centers")  # options for mode: "sampling", "reconstruction", "centers", "motifs"