GPU compute wherever
you already work
Run GPU jobs directly from your CI pipeline or Jupyter notebook — no infrastructure changes required.
GitHub Actions
CI/CD GPU jobs
Dispatch GPU training, inference, or benchmark jobs directly from your CI workflow with one step.
Jupyter Magic
Notebook GPU execution
Run any notebook cell on a remote GPU with a single %%ghostnexus magic command.
GitHub Actions
The ghostnexus/ghostnexus-run action submits your script to a GhostNexus GPU, streams logs back to your CI run, and fails the workflow if the job fails.
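Conceptually, the action implements a submit-and-poll loop: submit the script, poll until a terminal state, then pass or fail the CI step. The sketch below shows that pattern only; the function name and the injected `fetch_status` callable are illustrative assumptions, not the action's actual internals.

```python
import time


def wait_for_job(fetch_status, poll_seconds=5, timeout_minutes=30):
    """Poll fetch_status() until the job reaches a terminal state.

    fetch_status is a callable returning the job's current status string.
    Returns 'completed', 'failed', or 'timeout' (mirroring the action's
    documented status output).
    """
    deadline = time.monotonic() + timeout_minutes * 60
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status  # terminal state reached
        time.sleep(poll_seconds)
    return "timeout"
```

A CI step built on this would exit non-zero (and thus fail the workflow) whenever the returned status is anything other than `completed`.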
1. Add your API key as a repository secret
Go to Settings → Secrets → Actions and add:
GHOSTNEXUS_API_KEY = your key from the dashboard
2. Reference a script file in your repo
```yaml
# .github/workflows/train.yml
name: GPU Training
on: [push]
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run on GPU
        uses: ghostnexus/ghostnexus-run@v1
        with:
          api-key: ${{ secrets.GHOSTNEXUS_API_KEY }}
          task-name: train-${{ github.sha }}
          script-path: scripts/train.py
          timeout-minutes: 60
```

3. Or write an inline script
```yaml
- name: GPU benchmark
  uses: ghostnexus/ghostnexus-run@v1
  with:
    api-key: ${{ secrets.GHOSTNEXUS_API_KEY }}
    task-name: benchmark
    script: |
      import torch
      t = torch.randn(4096, 4096).cuda()
      result = torch.mm(t, t)
      print(f"GPU: {torch.cuda.get_device_name(0)}")
      print(f"VRAM: {torch.cuda.memory_allocated()/1e9:.1f} GB")
```

Available inputs
| Input | Required | Description |
|---|---|---|
| api-key | Yes | Your GhostNexus API key (use secrets.GHOSTNEXUS_API_KEY) |
| task-name | No | Job name shown in dashboard (default: ci-job) |
| script-path | One of the two | Path to a .py file in your repository |
| script | One of the two | Inline Python script to run |
| timeout-minutes | No | Max wait time in minutes (default: 30) |

Provide exactly one of script-path or script.
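Since script-path and script are mutually exclusive, a submission wrapper might validate them up front. This helper is hypothetical, not part of the action:

```python
def validate_inputs(script_path=None, script=None):
    """Require exactly one of script-path / script, mirroring the inputs table."""
    if bool(script_path) == bool(script):
        # Rejects both the "neither given" and the "both given" cases.
        raise ValueError("Provide exactly one of 'script-path' or 'script'.")
```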
Available outputs
| Output | Description |
|---|---|
| job-id | GhostNexus job identifier |
| status | completed \| failed \| timeout |
| cost-credits | Credits consumed by the job |
| duration-seconds | Actual GPU runtime in seconds |

Jupyter Magic Commands
Install the ghostnexus-magic package to add a %%ghostnexus cell magic to any JupyterLab or classic Jupyter notebook. The cell runs on a remote GPU; output streams back inline.
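Under the hood, a cell magic parses its option line and ships the cell body to the remote API. How ghostnexus-magic implements this is not documented here; as a rough sketch, the option line could be parsed like this (`parse_magic_line` and its structure are assumptions based on the options and defaults documented on this page):

```python
import argparse


def parse_magic_line(line):
    """Parse a %%ghostnexus option line such as '--task demo --timeout 10'."""
    parser = argparse.ArgumentParser(prog="%%ghostnexus", add_help=False)
    parser.add_argument("--task", default="notebook-job")
    parser.add_argument("--timeout", type=int, default=30)
    parser.add_argument("--no-logs", dest="no_logs", action="store_true")
    return parser.parse_args(line.split())
```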
1. Install and configure
```shell
# Install
pip install ghostnexus-magic
```

```python
# In your notebook
%load_ext ghostnexus_magic
%ghostnexus_config --api-key YOUR_API_KEY
```

2. Use the %%ghostnexus magic in any cell
```python
%%ghostnexus --task resnet-training
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).cuda()
x = torch.randn(32, 3, 224, 224).cuda()
with torch.no_grad():
    out = model(x)
print(f"GPU: {torch.cuda.get_device_name(0)}")
print(f"Output shape: {out.shape}")
```

Magic command options
| Option | Description |
|---|---|
| --task NAME | Job name in the dashboard (default: notebook-job) |
| --timeout MINUTES | Max wait time before timeout (default: 30) |
| --no-logs | Hide output logs, only show status badge |

Note on environment
The cell runs in a fresh Python environment on the GPU node. To install packages, invoke pip at the top of your cell (for example via subprocess, using sys.executable so pip targets the interpreter the cell runs under) or pre-install them on a custom base image.
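For example, the top of a cell could install its own dependencies before importing them. The `install` helper and the einops package name are illustrative, not part of ghostnexus-magic:

```python
import subprocess
import sys


def install(*packages):
    """Install packages into the interpreter this cell runs under.

    sys.executable ensures pip targets the same fresh environment
    the GPU node spawned for the cell, not some other interpreter.
    """
    subprocess.run([sys.executable, "-m", "pip", "install", *packages],
                   check=True)


# At the top of a %%ghostnexus cell:
# install("einops")
# import einops
```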
Ready to integrate?
Get your API key from the dashboard, add it to your secrets, and start dispatching GPU jobs in minutes.