Part 1: Local VAME Project with GPU#
Note: Open this Jupyter notebook in your VAME environment.
Initialize VAME#
import vame
import tkinter
from tkinter import filedialog
Download VAME Example Data#
Download video-1.csv and video-1.mp4 from here: https://drive.google.com/drive/folders/1feCukw2H0teLvaPbwelhD2E2ns_viHFX?usp=sharing and save them to your own Google Drive.
Step 1: Create a new VAME project#
project = 'NewVAMEProject_20211201'
working_directory = filedialog.askdirectory(title = 'Choose a directory for new VAME project')
videos = filedialog.askopenfilenames(title = 'Choose raw videos for new VAME project')
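If you are working on a headless machine (e.g. a remote GPU server without a display), the tkinter dialogs above will not open. As a minimal alternative sketch, you can set the paths directly instead (the paths below are placeholders):
project = 'NewVAMEProject_20211201'
working_directory = '/YOUR/WORKING/DIRECTORY'        # placeholder: directory for the new project
videos = ['/YOUR/WORKING/DIRECTORY/video-1.mp4']     # placeholder: list of raw video files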
Step 1.1: Initialize a VAME Project#
With the variables above, initialize a new project and save the path to the config.yaml file as config. This will create a new project folder with a predefined structure in your working_directory.
config = vame.init_new_project(project=project, videos=videos, working_directory=working_directory, videotype='.mp4')
Alternatively, you can open an existing VAME project by linking to the existing config.yaml file:
config = '/YOUR/WORKING/DIRECTORY/NewVAMEProject_20211201/config.yaml'
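If you want to inspect or adjust project parameters before proceeding, you can open config.yaml in a text editor, or load it in Python. A minimal sketch using PyYAML (the available keys depend on your VAME version):
import yaml

# Minimal sketch: load the project configuration and list its parameter names
with open(config, 'r') as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg.keys()))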
Step 1.2: Get your data ready#
First, move your DeepLabCut .csv data to the corresponding videos/pose_estimation/ directory. A scripted example follows below.
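For example, the copying can be scripted as follows (a minimal sketch; the source directory is a placeholder and the project folder is derived from config):
from pathlib import Path
import shutil

# Minimal sketch: copy DeepLabCut .csv files into the project's
# videos/pose_estimation/ folder (source directory is a placeholder)
project_path = Path(config).parent
dlc_results = Path('/PATH/TO/YOUR/DLC/RESULTS')   # placeholder
for csv_file in dlc_results.glob('*.csv'):
    shutil.copy(csv_file, project_path / 'videos' / 'pose_estimation')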
Then, use the function below to align your behavior videos egocentrically, given pose_ref_index as a list of reference coordinates for alignment, in this case [0,5]. Example: 0: snout, 1: forehand_left, 2: forehand_right, 3: hindleft, 4: hindright, 5: tail.
vame.egocentric_alignment(config, pose_ref_index=[0,5])
Alternatively, if your data is already egocentrically aligned, or you don't really believe that behavior should be studied egocentrically, transform your DeepLabCut .csv files to numpy arrays.
vame.csv_to_numpy(config, datapath='C:\\Research\\VAME\\vame_alpha_release-Mar16-2021\\videos\\pose_estimation\\')
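The datapath above is a Windows-style example; on other systems, point it at your own project's videos/pose_estimation/ folder. A minimal sketch that derives the path from config:
import os

# Minimal sketch: build the pose_estimation path (with trailing separator)
# from the location of the project's config.yaml
pose_dir = os.path.join(os.path.dirname(config), 'videos', 'pose_estimation', '')
vame.csv_to_numpy(config, datapath=pose_dir)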
Step 1.3: Create the Training Set#
vame.create_trainset(config)
Step 2: Train your VAME Model#
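Training is much faster on a GPU. Before starting, you can check that a CUDA device is visible (a minimal check, assuming VAME's PyTorch backend):
import torch

# Quick sanity check that PyTorch can see a CUDA-capable GPU
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('Device:', torch.cuda.get_device_name(0))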
vame.train_model(config)
Step 3: Evaluate the model#
vame.evaluate_model(config)
Step 4: Motif segmentation#
vame.pose_segmentation(config)
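To inspect the result, the per-frame motif labels are written as .npy files under the project's results/ folder; the exact subfolder and file names depend on your VAME version and n_cluster setting, so the sketch below simply searches for them:
import glob
import os
import numpy as np

# Minimal sketch: locate and load the motif label files produced by pose_segmentation
results_dir = os.path.join(os.path.dirname(config), 'results')
label_files = glob.glob(os.path.join(results_dir, '**', '*label*.npy'), recursive=True)
print(label_files)
if label_files:
    labels = np.load(label_files[0])
    print('Frames:', labels.shape[0], '| motifs present:', np.unique(labels))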
Step 5: Create Motif Videos#
vame.motif_videos(config, videoType='.mp4')
Step 6: Create behavioral hierarchies#
vame.community(config, show_umap=False, cut_tree=2)
Step 7: Create community videos#
vame.community_videos(config)
Step 8: Create UMAP visualization#
vame.visualization(config, label=None) #options: label: None, "motif", "community"
Step 9: Create GIF Animation#
# Note: This function is currently very slow. Once the frames are saved you can create a video
# or gif via e.g. ImageJ or other tools
vame.gif(config, pose_ref_index=[0,5], subtract_background=True, start=None,
length=500, max_lag=30, label='community', file_format='.mp4', crop_size=(300,300))
Step 10: Create a generative model#
vame.generative_model(config, mode="centers") #options: mode: "sampling", "reconstruction", "centers", "motifs"