Integrations

GPU compute wherever you already work

Run GPU jobs directly from your CI pipeline or Jupyter notebook — no infrastructure changes required.

GitHub Actions

The ghostnexus/ghostnexus-run action submits your script to a GhostNexus GPU, streams logs back to your CI run, and fails the workflow if the job fails.

1. Add your API key as a repository secret

Go to Settings → Secrets and variables → Actions and add a repository secret:

GHOSTNEXUS_API_KEY = your key from the dashboard

2. Reference a script file in your repo

yaml — .github/workflows/train.yml
# .github/workflows/train.yml
name: GPU Training

on: [push]

jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run on GPU
        uses: ghostnexus/ghostnexus-run@v1
        with:
          api-key: ${{ secrets.GHOSTNEXUS_API_KEY }}
          task-name: train-${{ github.sha }}
          script-path: scripts/train.py
          timeout-minutes: 60

3. Or write an inline script

yaml — inline script
- name: GPU benchmark
  uses: ghostnexus/ghostnexus-run@v1
  with:
    api-key: ${{ secrets.GHOSTNEXUS_API_KEY }}
    task-name: benchmark
    script: |
      import torch
      t = torch.randn(4096, 4096).cuda()
      result = torch.mm(t, t)
      print(f"GPU: {torch.cuda.get_device_name(0)}")
      print(f"VRAM: {torch.cuda.memory_allocated()/1e9:.1f} GB")

Available inputs

| Input | Required | Description |
| --- | --- | --- |
| api-key | Yes | Your GhostNexus API key (use secrets.GHOSTNEXUS_API_KEY) |
| task-name | No | Job name shown in the dashboard (default: ci-job) |
| script-path | Either/or | Path to a .py file in your repository |
| script | Either/or | Inline Python script to run |
| timeout-minutes | No | Maximum wait time in minutes (default: 30) |
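The two script inputs are mutually exclusive: the action needs exactly one of script-path or script. A minimal sketch of that either/or rule in Python (the helper name validate_script_inputs is illustrative, not part of the action):

```python
def validate_script_inputs(script_path=None, script=None):
    """Exactly one of script-path / script must be provided.

    Raises ValueError when both or neither are set, mirroring
    the Either/or rows in the inputs table.
    """
    if (script_path is None) == (script is None):
        raise ValueError("Provide exactly one of script-path or script")
    return script_path if script_path is not None else script
```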

Available outputs

| Output | Description |
| --- | --- |
| job-id | GhostNexus job identifier |
| status | completed \| failed \| timeout |
| cost-credits | Credits consumed by the job |
| duration-seconds | Actual GPU runtime in seconds |
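If you give the step an id, later steps can read these outputs through the standard GitHub Actions steps context. A sketch (the step id gpu and the echo lines are only illustrative):

```yaml
- name: Run on GPU
  id: gpu
  uses: ghostnexus/ghostnexus-run@v1
  with:
    api-key: ${{ secrets.GHOSTNEXUS_API_KEY }}
    script-path: scripts/train.py

- name: Report cost
  if: always()
  run: |
    echo "Status:  ${{ steps.gpu.outputs.status }}"
    echo "Credits: ${{ steps.gpu.outputs.cost-credits }}"
    echo "Runtime: ${{ steps.gpu.outputs.duration-seconds }}s"
```

Using if: always() means the cost report runs even when the GPU job fails the workflow.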

Jupyter Magic Commands

Install the ghostnexus-magic package to add a %%ghostnexus cell magic to any JupyterLab or classic Jupyter notebook. The cell runs on a remote GPU; output streams back inline.

1. Install and configure

bash
# Install
pip install ghostnexus-magic

# In your notebook
%load_ext ghostnexus_magic
%ghostnexus_config --api-key YOUR_API_KEY

2. Use the %%ghostnexus magic in any cell

python — notebook cell
%%ghostnexus --task resnet-training
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).cuda()
x = torch.randn(32, 3, 224, 224).cuda()

with torch.no_grad():
    out = model(x)

print(f"GPU: {torch.cuda.get_device_name(0)}")
print(f"Output shape: {out.shape}")

Magic command options

| Option | Description |
| --- | --- |
| --task NAME | Job name in the dashboard (default: notebook-job) |
| --timeout MINUTES | Maximum wait time before timeout (default: 30) |
| --no-logs | Hide output logs; show only the status badge |
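The options combine freely on the magic line. For example, a quiet benchmark cell (flag values here are illustrative):

```python
%%ghostnexus --task quick-bench --timeout 10 --no-logs
import torch
t = torch.randn(1024, 1024).cuda()
print((t @ t).shape)
```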

Note on environment

The cell runs in a fresh Python environment on the GPU node, so packages installed in one cell do not persist to the next. To install packages, run pip via subprocess at the top of your cell (for example, subprocess.run([sys.executable, '-m', 'pip', 'install', 'torch'], check=True)) or pre-install them on a custom base image.
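One way to keep that tidy is a small helper pasted at the top of the cell — a sketch using only the standard library (the helper names are ours, not part of ghostnexus-magic):

```python
import subprocess
import sys

def pip_install_command(*packages):
    # Build the install command against the interpreter actually
    # running the cell, which is safer than calling bare "pip".
    return [sys.executable, "-m", "pip", "install", "--quiet", *packages]

def ensure_packages(*packages):
    # Run at the top of a %%ghostnexus cell: each cell starts in a
    # fresh environment on the GPU node, so installs don't carry over.
    subprocess.run(pip_install_command(*packages), check=True)
```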

Ready to integrate?

Get your API key from the dashboard, add it to your secrets, and start dispatching GPU jobs in minutes.