Welcome to GNUS AI, the decentralized GPU network that's revolutionizing how developers access computing power. This guide will walk you through setting up your first workload, understanding our hybrid blockchain architecture, and maximizing your efficiency on the Super Genius Network.
What is GNUS AI?
GNUS AI is a decentralized network of GPU resources that allows developers to run AI workloads at a fraction of the cost of traditional cloud providers. By leveraging idle devices worldwide—from smartphones and gaming consoles to high-end servers—we create a global supercomputer that's both cost-effective and environmentally friendly.
Our architecture uses a patented hybrid blockchain system. It combines a fast Directed Acyclic Graph (DAG) for transactions with a secure blockchain for validation. This ensures that your computations are verified, secure, and incredibly fast.
Key Benefits
1. Cost Efficiency
Traditional cloud providers like AWS or GCP charge a premium for GPU instances. GNUS AI cuts these costs by up to 80% by utilizing idle compute power. You only pay for what you use, with no hidden infrastructure fees.
2. High Performance & Low Latency
By distributing workloads across a global network, we minimize latency. Computation happens closer to where it's needed, making it ideal for real-time AI inference and edge computing applications.
3. Privacy & Security
Data privacy is at our core. Your data is processed across a federated network where no single node has access to the complete dataset unless explicitly allowed. All results are verified with zk-SNARK proofs, which confirm that a computation was performed correctly without revealing the underlying data.
Setting Up Your First Workload
Step 1: Install the SDK
First, you'll need to add our SDK to your project. We support npm, yarn, and bun.
bun add @gnus-ai/sdk
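The command above uses bun; if your project uses one of the other supported package managers instead, the equivalent commands are:

```shell
# Either of these installs the same package:
npm install @gnus-ai/sdk
yarn add @gnus-ai/sdk
```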
Step 2: Initialize the Client
Create a new instance of the GNUS client using your API key. You can obtain an API key from the GNUS Dashboard.
import { GNUSClient } from "@gnus-ai/sdk";
const client = new GNUSClient({
  apiKey: "your-api-key-here",
  network: "mainnet",
});
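Hardcoding an API key is fine for a quick test, but in practice it is safer to load it from the environment. A minimal sketch, assuming the option names (apiKey, network) shown in the snippet above; adjust to your SDK version:

```typescript
// Shape of the options object passed to GNUSClient, as used above.
interface GNUSClientOptions {
  apiKey: string;
  network: "mainnet" | "testnet";
}

// Build client options from an environment map (e.g. process.env)
// instead of committing a key to source control.
function loadClientOptions(
  env: Record<string, string | undefined>
): GNUSClientOptions {
  const apiKey = env.GNUS_API_KEY;
  if (!apiKey) {
    throw new Error("GNUS_API_KEY is not set");
  }
  // Default to mainnet unless testnet is explicitly requested.
  const network = env.GNUS_NETWORK === "testnet" ? "testnet" : "mainnet";
  return { apiKey, network };
}
```

You would then construct the client with new GNUSClient(loadClientOptions(process.env)). The variable names GNUS_API_KEY and GNUS_NETWORK are illustrative, not an SDK convention.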
Step 3: Define Your Workload
Define the AI model and the parameters for your task. GNUS AI supports various frameworks, including TensorFlow, PyTorch, and ONNX.
const workload = {
  model: "llama-3-8b",
  input: "Analyze this dataset for anomalies...",
  requirements: {
    minVram: "8GB",
    precision: "fp16",
  },
};
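Typing the workload object makes it easy to catch mistakes before submission. The field names below mirror the snippet above, but treat this as an illustrative sketch; the real SDK schema may expose more options:

```typescript
// Hardware requirements for the job, as in the example above.
interface WorkloadRequirements {
  minVram: string; // e.g. "8GB"
  precision: "fp32" | "fp16" | "int8";
}

interface Workload {
  model: string;
  input: string;
  requirements: WorkloadRequirements;
}

// Cheap local sanity checks: catch a malformed request before it
// makes a round trip to the network. Returns a list of problems.
function validateWorkload(w: Workload): string[] {
  const errors: string[] = [];
  if (!w.model.trim()) errors.push("model must be non-empty");
  if (!/^\d+(GB|MB)$/.test(w.requirements.minVram)) {
    errors.push('minVram should look like "8GB"');
  }
  return errors;
}
```

Calling validateWorkload on the example workload returns an empty array; a blank model name or a minVram like "lots" would each add an error.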
Step 4: Run and Monitor
Submit your job to the network. The SDK provides real-time updates on the progress and status of your computation.
const result = await client.run(workload);
console.log("Computation complete:", result.data);
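On a distributed network, individual jobs can fail transiently (for example, a node going offline mid-computation), so it is worth wrapping the call in a small retry helper. The sketch below is generic TypeScript, not an official SDK feature:

```typescript
// Retry an async task with exponential backoff. Works with any
// promise-returning function, including the client.run call above.
async function runWithRetry<T>(
  task: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
      // Back off between attempts: 500ms, 1000ms, 2000ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

With this in place, the submission above becomes: const result = await runWithRetry(() => client.run(workload));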
Understanding the Dashboard
The GNUS Dashboard is your central hub for managing resources. Here you can:
- Monitor active jobs and historical performance.
- Manage your $GNUS token balance and payouts.
- View detailed analytics on cost savings and compute efficiency.
- Configure security settings and API access.
Best Practices for New Users
- Optimize Model Size: For mobile nodes, use quantized versions of models (e.g., 4-bit or 8-bit) to ensure faster distribution and execution.
- Batch Tasks: When running inference on large datasets, batching your requests can significantly reduce network overhead.
- Use the Right Tags: Tag your workloads correctly to ensure they are routed to nodes with the appropriate hardware capabilities.
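The batching advice above can be made concrete: split a large input array into fixed-size chunks so each chunk is submitted as one job, rather than sending one request per item. This helper is generic TypeScript, not part of the SDK, and the right chunk size depends on your model and target hardware:

```typescript
// Split an array into batches of at most `size` items. The final
// batch may be smaller if the input length is not a multiple of size.
function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error("size must be positive");
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

For example, chunk(inputs, 32) turns a 100-item dataset into four jobs instead of 100 individual requests.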
Conclusion
Getting started with GNUS AI is the first step toward a more open, efficient, and decentralized AI future. By following this guide, you've learned the basics of our network and how to run your first task. Stay tuned for our advanced guides on performance optimization and custom model deployment.
Need help? Join our Discord community or check out the full documentation.
