Quickstart

This guide walks you through installing jnc on your Jetson device, creating your first workspace, adding dependencies, and running code.

Install jnc

Run the following on your Jetson:
curl -L https://github.com/gentle-weapons/jnc-releases/releases/latest/download/pbctl-aarch64-unknown-linux-gnu.tar.gz | tar -xz
sudo mv pbctl /usr/local/bin/jnc
Verify the installation:
jnc --version

Verify your hardware

First, confirm jnc can detect your Jetson:
jnc probe
You should see output like:
{
  "arch": "aarch64",
  "l4t": "36.4.0",
  "jetpack": "6.1",
  "cuda": "12.2",
  "module": "Jetson Orin Nano",
  "compute_capability": "8.7"
}
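If you script against jnc, the probe output can be consumed programmatically. Here is a minimal Python sketch, assuming `jnc probe` prints exactly the JSON shown above to stdout (the field names come from that sample output):

```python
import json

# Sample `jnc probe` output, copied from the guide above.
probe_json = """
{
  "arch": "aarch64",
  "l4t": "36.4.0",
  "jetpack": "6.1",
  "cuda": "12.2",
  "module": "Jetson Orin Nano",
  "compute_capability": "8.7"
}
"""

info = json.loads(probe_json)

# Compute capability 8.7 corresponds to the Orin family.
major, minor = (int(x) for x in info["compute_capability"].split("."))
print(f"Detected {info['module']} (sm_{major}{minor}, CUDA {info['cuda']})")
```

In a real script you would capture the JSON from `jnc probe` with `subprocess.run` instead of a string literal.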

Create a workspace

jnc init my-jetson-app
cd my-jetson-app
This creates a jnc.toml manifest file that defines your workspace’s dependencies, tasks, and target platform.

Add dependencies

Install packages from the conda ecosystem:
jnc add python numpy opencv
This resolves compatible versions, updates the manifest, and creates a lock file (jnc.lock) for reproducibility.

Set system requirements

Specify the L4T version your workspace requires so that dependency resolution picks packages compatible with your Jetson. Edit jnc.toml and add:
[system-requirements]
l4t = "36"
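The idea behind the constraint is major-version matching: a requirement of "36" should accept the "36.4.0" reported by `jnc probe` above. The following Python sketch illustrates that idea only; jnc's actual matching semantics are not documented here:

```python
# Illustration only: jnc's real version-matching rules may differ.
def l4t_satisfies(required: str, detected: str) -> bool:
    """Check whether a detected L4T version falls under a required major version."""
    return detected.split(".")[0] == required.split(".")[0]

print(l4t_satisfies("36", "36.4.0"))  # True
print(l4t_satisfies("36", "35.5.0"))  # False
```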

Run commands

Run any command inside the workspace environment:
jnc run python -c "import cv2; print(f'OpenCV {cv2.__version__}')"
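For anything longer than a one-liner, put the check in a script and run it through the environment. A hypothetical check_env.py (the script name and its use of importlib are illustrative, not part of jnc):

```python
# check_env.py -- report which workspace dependencies import cleanly.
# Run it inside the environment with: jnc run python check_env.py
import importlib.util

def check_imports(names):
    """Map each module name to whether it can be found on the current path."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, ok in check_imports(["numpy", "cv2"]).items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```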

Define tasks

Add reusable tasks to your manifest. The example below assumes PyTorch has also been added to the workspace (e.g. with jnc add pytorch):
jnc task add check-gpu "python -c 'import torch; print(torch.cuda.is_available())'"
Run them by name:
jnc run check-gpu

Enter the environment shell

Start an interactive shell with all dependencies available:
jnc shell
python -c "import numpy; print('Ready for development!')"
exit

What’s in the manifest?

After these steps, your jnc.toml looks something like:
[project]
name = "my-jetson-app"
channels = ["conda-forge"]
platforms = ["linux-aarch64"]

[system-requirements]
l4t = "36"

[dependencies]
python = ">=3.12,<4"
numpy = ">=2.2,<3"
opencv = ">=4.10,<5"

[tasks]
check-gpu = "python -c 'import torch; print(torch.cuda.is_available())'"

Next steps

  • Basic Usage — common workflows at a glance
  • Environments — manage separate dependency sets for inference, training, etc.
  • Tasks — advanced task configuration
  • CLI Reference — complete command documentation