
Reseller business · April 27, 2026 · 10 min read

Bulk SMM orders: how agencies process 100,000+ orders per day

Operational patterns for high-volume SMM ordering. Spreadsheet-to-API workflows, idempotency at scale, deduplication, monitoring, and how to keep a six-figure-per-day pipeline running without losing track of orders.


Once you cross a few thousand orders per day, the manual flow stops working. Spreadsheets accumulate. Copy-paste errors creep in. You stop being able to tell which orders went through and which silently failed. Customer support gets buried under questions you can't answer quickly, because your records and the panel's records have diverged.

What follows is the operational pattern that scales. It's what agencies running 100K+ orders per day actually do. None of it is novel — it's just discipline applied consistently.

Where this advice comes from

We see the upstream side of this every day. The agencies running 100K/day look fundamentally different from the ones running 5K/day. Not because they have more clients — because they have a real database and a real submission pipeline. Once those are in place, scaling further is mostly buying more rate-limit headroom.

Stop using spreadsheets

The first switch is from spreadsheets to a database. Spreadsheets work at small order volumes because the operator can keep the whole picture in their head. Past a few thousand rows, the failure modes appear:

  • Two operators editing the same sheet overwrite each other's changes.
  • A row gets accidentally deleted; the order goes through but is no longer tracked.
  • Filter and sort changes leave the sheet in an inconsistent state.
  • Numbers in cells silently switch from "1000" (text) to 1,000 (number) and break formulas.

A small SQLite or PostgreSQL database with three tables — customers, orders, transactions — solves all of these. Setup time is an afternoon. Reliability improvement is permanent.

Schema for a high-volume reseller

-- Customers (downstream buyers)
CREATE TABLE customers (
  id            BIGSERIAL PRIMARY KEY,
  email         TEXT NOT NULL UNIQUE,
  display_name  TEXT,
  created_at    TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Orders we forwarded to the upstream panel
CREATE TABLE orders (
  id                  BIGSERIAL PRIMARY KEY,
  customer_id         BIGINT NOT NULL REFERENCES customers(id),
  upstream_order_id   TEXT,
  request_id          UUID NOT NULL UNIQUE,  -- idempotency key
  service_id          INT NOT NULL,
  link                TEXT NOT NULL,
  quantity            INT NOT NULL,
  cost                NUMERIC(20,8) NOT NULL,
  retail_charge       NUMERIC(20,8) NOT NULL,
  status              TEXT NOT NULL DEFAULT 'pending',
  start_count         INT,
  remains             INT,
  created_at          TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at          TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX orders_status_idx ON orders(status);
CREATE INDEX orders_upstream_idx ON orders(upstream_order_id);
CREATE INDEX orders_customer_idx ON orders(customer_id, created_at DESC);

Three details matter here:

  • request_id is a UUID with a UNIQUE constraint. This is how you make the upstream API call idempotent — if you retry, the DB rejects the duplicate row before you double-charge yourself (sketched in code after this list).
  • cost and retail_charge are separate. Wholesale cost is what you paid the upstream; retail charge is what you billed the customer. Keep them separate from day one. You'll want them for margin reporting later.
  • NUMERIC(20,8) for money. Float drift over a million orders adds up to meaningful real-world dollars.
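
To make the request_id point concrete, here's a minimal sketch of the idempotent insert using node-postgres. The function name and parameter shape are illustrative; money values travel as strings so they reach NUMERIC without a float round-trip.

import { Pool } from "pg";

const pool = new Pool();  // connection settings come from PG* env vars

// Insert an order exactly once. A retry that re-sends the same
// request_id hits the UNIQUE constraint and becomes a no-op.
async function insertOrderOnce(o: {
  customerId: number;
  requestId: string;      // caller-generated UUID, stable across retries
  serviceId: number;
  link: string;
  quantity: number;
  cost: string;           // wholesale cost, as a string for NUMERIC
  retailCharge: string;   // what you bill the customer
}): Promise<boolean> {
  const res = await pool.query(
    `INSERT INTO orders (customer_id, request_id, service_id, link,
                         quantity, cost, retail_charge)
     VALUES ($1, $2, $3, $4, $5, $6, $7)
     ON CONFLICT (request_id) DO NOTHING`,
    [o.customerId, o.requestId, o.serviceId, o.link,
     o.quantity, o.cost, o.retailCharge]
  );
  return res.rowCount === 1;  // false: duplicate, the order already exists
}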

The submission pipeline

Every order goes through a fixed sequence:

  1. Customer requests an order via your storefront.
  2. You validate the request (link parses, quantity within service bounds, customer balance covers retail charge).
  3. You insert a row in your orders table with status 'pending' and a fresh request_id. Charge the customer's balance inside the same transaction.
  4. Async worker picks up pending orders, calls the upstream panel's add action with the request_id, updates status to 'submitted' (with the upstream's order ID stored in upstream_order_id).
  5. Webhook delivery or scheduled status polling updates downstream rows when the upstream order changes state.
  6. Terminal states (completed, partial, cancelled, refunded) trigger customer-facing notifications and any refund logic.

Every order has a row in your database before any external API call. That's the entire trick.

This is what the pipeline accomplishes that ad-hoc submission can't: if the upstream API fails, retries, or returns an ambiguous response, you have a record. You're never in the position of "the customer was charged but I have no idea whether the order went through".
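
Here's a sketch of the step-4 worker, assuming node-postgres; submitToUpstream is a placeholder for your panel API client. FOR UPDATE SKIP LOCKED lets several workers drain the pending queue without claiming the same row twice.

import { Pool } from "pg";

const pool = new Pool();

// Placeholder: calls the upstream add action (passing the request_id)
// and resolves with the upstream's order ID.
declare function submitToUpstream(order: {
  request_id: string; service_id: number; link: string; quantity: number;
}): Promise<string>;

// Claim one pending order, submit it, record the upstream ID.
async function submitNext(): Promise<boolean> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const { rows } = await client.query(
      `SELECT id, request_id, service_id, link, quantity FROM orders
       WHERE status = 'pending'
       ORDER BY created_at
       LIMIT 1
       FOR UPDATE SKIP LOCKED`
    );
    if (rows.length === 0) { await client.query("COMMIT"); return false; }

    const upstreamId = await submitToUpstream(rows[0]);
    await client.query(
      `UPDATE orders SET status = 'submitted', upstream_order_id = $1,
              updated_at = NOW() WHERE id = $2`,
      [upstreamId, rows[0].id]
    );
    await client.query("COMMIT");
    return true;
  } catch (err) {
    await client.query("ROLLBACK");  // row stays 'pending', retried later
    throw err;
  } finally {
    client.release();
  }
}

Holding the row lock across the HTTP call is the simple version; at higher volume you'd flip the row to an intermediate 'submitting' status in one short transaction, call the upstream, then record the result in a second one.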

Webhook handling at scale

At 100K+ orders per day, you'll see bursts of hundreds of webhooks per minute when large orders complete in batches. Two patterns, sketched in code after this list:

  • Receive cheap, process async. The webhook handler writes the event to a queue (Redis Stream, BullMQ, PostgreSQL NOTIFY) and returns 200 immediately. Don't update the DB in the handler. Let a worker drain the queue.
  • Deduplicate by event ID. Webhooks have at-least-once delivery semantics. The same event can arrive twice. Track processed event IDs in Redis with a TTL longer than the panel's retry window. Reject duplicates.
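
A sketch of both patterns together, assuming Express and node-redis; the event_id field name and the /webhooks/notpanel route are assumptions. The handler does one Redis dedup check and one stream append, then acks.

import express from "express";
import { createClient } from "redis";

const app = express();
app.use(express.json());

const redis = createClient();
await redis.connect();

// Receive cheap: dedup by event ID, enqueue, ack. No DB work here.
app.post("/webhooks/notpanel", async (req, res) => {
  const eventId = String(req.body.event_id ?? "");   // field name is an assumption
  if (!eventId) return res.sendStatus(400);

  // First writer wins; duplicates inside the TTL are dropped.
  // Keep the TTL longer than the panel's retry window.
  const first = await redis.set(`wh:${eventId}`, "1", { NX: true, EX: 86_400 });
  if (first !== "OK") return res.sendStatus(200);    // duplicate: ack and drop

  // Hand the raw payload to a Redis Stream for a worker to drain.
  await redis.xAdd("webhook-events", "*", { payload: JSON.stringify(req.body) });
  res.sendStatus(200);
});

app.listen(3000);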

Status polling as a backstop

Even with webhooks wired correctly, run a periodic reconciliation that polls status for orders stuck in non-terminal states too long. Reasons:

  • Webhook deliveries can be lost or rejected by your endpoint.
  • The upstream panel might miss firing a webhook due to their own bugs.
  • Your event-ID dedup might mistakenly reject a legitimate event.

A reconciliation pass that runs every 15–30 minutes, queries the upstream's batch-status endpoint for orders that haven't changed status in over an hour, and corrects discrepancies catches the cases webhooks miss. The cost is small (one batch call per tick); the operational confidence is large.
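
A sketch of that reconciliation tick, assuming node-postgres; fetchUpstreamStatuses is a placeholder wrapping the panel's batch-status endpoint.

import { Pool } from "pg";

const pool = new Pool();
const TERMINAL = ["completed", "partial", "cancelled", "refunded"];

// Placeholder for your panel client: returns a map of
// upstream_order_id -> current upstream status.
declare function fetchUpstreamStatuses(
  ids: string[]
): Promise<Record<string, string>>;

// Re-poll orders that haven't moved in over an hour and fix drift.
async function reconcile(): Promise<void> {
  const { rows } = await pool.query(
    `SELECT id, upstream_order_id, status FROM orders
     WHERE status <> ALL($1::text[])
       AND upstream_order_id IS NOT NULL
       AND updated_at < NOW() - INTERVAL '1 hour'
     ORDER BY updated_at
     LIMIT 500`,
    [TERMINAL]
  );
  if (rows.length === 0) return;

  const fresh = await fetchUpstreamStatuses(rows.map(r => r.upstream_order_id));
  for (const row of rows) {
    const next = fresh[row.upstream_order_id];
    if (next && next !== row.status) {
      await pool.query(
        `UPDATE orders SET status = $1, updated_at = NOW() WHERE id = $2`,
        [next, row.id]
      );
    }
  }
}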

Bulk submission patterns

For genuinely bulk operations (a campaign placing 5,000 orders against a list of accounts), don't naively iterate and submit. Three things go wrong:

  • You exhaust your per-IP rate limit and start getting 429 responses.
  • If your worker crashes mid-iteration, you don't know how far it got.
  • If any one order fails, the state of the whole batch becomes unclear.

Instead, write all 5,000 rows to your database first with status 'pending' (one way to do that is sketched below). Then have your worker pool drain the pending queue while respecting rate limits. Each order is independent, failures are isolated to individual rows, and restarts pick up where they left off.
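
Here's one way to do the "rows first" step with node-postgres: the whole campaign lands in a single INSERT via unnest, each row with its own request_id, before any worker talks to the upstream. Balance handling is omitted; enqueueBulk and its parameter shape are illustrative.

import { Pool } from "pg";
import { randomUUID } from "crypto";

const pool = new Pool();

// Write the entire campaign as pending rows in one statement.
// Workers drain them later, respecting rate limits.
async function enqueueBulk(
  customerId: number,
  serviceId: number,
  targets: { link: string; quantity: number; cost: string; retail: string }[]
): Promise<void> {
  await pool.query(
    `INSERT INTO orders (customer_id, request_id, service_id, link,
                         quantity, cost, retail_charge, status)
     SELECT $1, r, $2, l, q, c, rc, 'pending'
     FROM unnest($3::uuid[], $4::text[], $5::int[],
                 $6::numeric[], $7::numeric[]) AS t(r, l, q, c, rc)
     ON CONFLICT (request_id) DO NOTHING`,
    [customerId, serviceId,
     targets.map(() => randomUUID()),   // fresh request_id per row
     targets.map(t => t.link),
     targets.map(t => t.quantity),
     targets.map(t => t.cost),
     targets.map(t => t.retail)]
  );
}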

Reader poll

Of high-volume integrators on NotPanel, which monitoring metric trips the most useful alerts?

  • Submission lag (created → submitted): 41%
  • Failed-order rate per service: 28%
  • Status sync lag (upstream → us): 19%
  • Margin erosion per service: 12%

Survey of 84 integrations doing >10K orders/day, March 2026.

Monitoring at this scale

At 100K+ orders per day, you can't watch everything manually. Minimum viable monitoring (an example alert query is sketched after this list):

  • Submission lag. Time from order created to order submitted upstream. Should be seconds. If it climbs past minutes, your worker is behind.
  • Status sync lag. Time from upstream completion to downstream notification. Webhooks should be instant. If lag is measured in minutes, something's stuck.
  • Failed-order rate. Percentage ending in a non-completed terminal status. Establish a baseline per service. Alert on significant deviation.
  • Margin per service. Wholesale cost vs retail charge, aggregated daily. Helps you spot when an upstream price change has eroded your margin without your retail price being adjusted.
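
As an example for the first metric, a sketch of the submission-lag check with node-postgres. It assumes updated_at is touched when the worker flips status off 'pending', which the pipeline above does; the 15-minute window and any alert threshold are up to you.

import { Pool } from "pg";

const pool = new Pool();

// p95 seconds from row creation to first status change, last 15 minutes.
// Assumes updated_at is set when status leaves 'pending'.
async function submissionLagP95(): Promise<number> {
  const { rows } = await pool.query(
    `SELECT COALESCE(
              percentile_cont(0.95) WITHIN GROUP (
                ORDER BY EXTRACT(EPOCH FROM (updated_at - created_at))::float8),
              0) AS p95
     FROM orders
     WHERE status <> 'pending'
       AND created_at > NOW() - INTERVAL '15 minutes'`
  );
  return Number(rows[0].p95);  // alert if this climbs past a few seconds
}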

At a glance:

  • 100K+ orders/day: the threshold
  • 30 min reconciliation tick: catches webhook gaps
  • <5 s submission lag: healthy worker
  • 1 row per order in your DB, before any API call

Customer-facing transparency

Your customers will ask "where's my order" thousands of times. Make the answer self-service:

  • Dashboard with their order history and live status.
  • Email on terminal status changes (completion, failure, refund).
  • Order-detail page with upstream status code, your retail charge, available refill/cancel actions.

Every "where's my order" ticket your customers don't need to file is time you don't have to spend on support. At scale this is the difference between a profitable agency and one that bleeds margin to support staffing.

What this scales to

The pattern scales linearly up to the limits of your wholesale source's rate limits, typically several hundred orders per second. Past that, you're either coordinating across multiple API keys (which helps with per-key limits, not per-IP ones) or you've outgrown the wholesale source itself and need to negotiate higher limits.

For most agencies, the bottleneck never becomes infrastructure. It becomes upstream pricing, customer acquisition, or support capacity. The technical pipeline above is the foundation that lets those other constraints become the binding ones.
