I do not make little brochure sites.
01> waking the machine...
02> checking the stack...
03> reconnecting the wild parts...
04> redis online
05> postgres online
06> telegram gateways online
07> ai pipelines unstable
08> proxy mesh synchronized
09> docker daemon still being a dick
10> uptime: ugly but alive
11> mental state: shipping
monument state
STRAPON.TECH
i do not live in one lane.
i ship the messy stuff too.
if it can be built, wired, deployed, or rescued - i can probably do it.
the stack is wide because the work is wide.
operator profile
60+ repos, a stupidly broad stack, and zero fear of weird, ugly, high-stakes work.
proof / no cosplay
this is where the flex turns into evidence
ambient instability
identity / no-cosplay layer
no one-lane
no cute
roleplay.
I do not need to pretend I am only frontend, only backend, or only infra. The job changes, so I change with it.
I build systems people actually use.
Sometimes the whole product stack.
Frontend.
Backend.
Data.
ML / AI.
Infra.
Bots.
Payments.
VPN / proxy / ops.
If the task is weird, even better.
"sometimes this stops being development and starts being owning the whole damn machine."
chaos / big-range layer
too much
for one
lane.
The point is not polish for polish's sake. The point is range: products, pipelines, infra, AI, deployments, and the glue that keeps all of it from falling apart.
active payload names
unstable visibility
fake operator feed
commit 9c1fd2 - hotfix/retry-mesh
deploy/main -> fra1-vps-03
queue.redis backlog: 142
worker.ai.infer status: throttled
ssh root@hel1-core active
telegram.event stream burst x48
branch/unstable-operator merged
proxy.mesh sync delta: 02
postgres vacuum overdue
ci/cd pipeline rerun with spite
api /payments/reconcile 200
scraper cluster woke up again
tg-gift-worker lag detected
payments-sync resumed after backpressure
relay-beat heartbeat nominal
anti-abuse model confidence dipped
fallback-gateway warmed in iad-01
orchestration-layer deferred cleanup
branch rot / service pressure
origin/main
feature/mesh-retry
ops/panic-hotfix
good.
the queue survived. the product survived. i survived.
production remembers everything, so ship like it.
some systems do not sleep. neither do the people running them.
ego / earned by shipping
most developers ship one layer.
i ship the whole stack.
people call it overengineering.
then the edge cases arrive.
production is the only serious review.
pretty code that dies in incidents is decorative nonsense.
speed matters.
breadth matters.
shipping matters.
legend / operational mass
not just a dev.
a guy who
actually ships.
The myth works when the surface area is huge but still coherent. Services, queues, workers, products, clients, weird requests - it all needs to stay alive.
service graph fragments
all paths remain live
tg-gift-worker
dependency mesh active
payments-sync
dependency mesh active
queue-recovery
dependency mesh active
relay-beat
dependency mesh active
anti-abuse
dependency mesh active
node-health
dependency mesh active
orchestration-layer
dependency mesh active
ai-routing
dependency mesh active
fallback-gateway
dependency mesh active
proxy-mesh
dependency mesh active
stars-fulfillment
dependency mesh active
packet-telemetry
dependency mesh active
incident memory
incident/4421: relay mesh entered degraded visibility
incident/4421: fallback-gateway promoted in fra1
incident/4422: stars-fulfillment queue saturation avoided
incident/4423: anti-abuse false-positive cluster contained
incident/4424: ai-routing drift acknowledged, not fatal
atmospheric throughput
the machine / private internet empire
production
is the only
real judge.
This is not a polite portfolio. It is a control room for the kind of work where frontend, backend, data, infra, and AI all touch each other.
github repos
60+
stack width
wide
client trust
friends first
delivery mode
full send
more-stars / commerce core
Telegram commerce infra, payment routing, event flow discipline, and the boring stuff that makes money move without drama.
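A minimal sketch of the "event flow discipline" part: dedupe payment events by id so a retried webhook cannot double-charge. Everything here is illustrative, not the real service's API — the in-memory set stands in for whatever durable store (redis, postgres) actually backs it.

```python
# idempotent payment-event handling: apply each event at most once.
# _processed is an in-memory stand-in for a durable store.
_processed: set[str] = set()

def handle_payment_event(event_id: str, apply_charge) -> bool:
    """Apply a payment event exactly once, keyed by its event id.

    Returns True if the charge ran, False if this was a duplicate
    delivery (retried webhook, replayed queue message, etc.).
    """
    if event_id in _processed:
        return False  # already applied; do nothing, do not charge twice
    apply_charge()
    _processed.add(event_id)
    return True
```

The boring-but-critical design choice: record the id only after the charge succeeds, so a crash mid-handler results in a retry, never a lost payment.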
relay / proxy control plane
Fallback-aware routing for when upstream reality gets flaky and you still need the thing to work.
analytics / product telemetry
Turns user movement, bot events, and operator noise into product signal that is actually useful.
orchestration-layer / queue-recovery
Keeps workers, retries, and fallback queues from turning incidents into folklore.
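A rough sketch of the retry side of that: capped exponential backoff, then a dead-letter hook instead of silent loss. Function names and knobs are illustrative, not the actual orchestration-layer API.

```python
import time

def run_with_retries(task, attempts=5, base=0.5, cap=30.0,
                     dead_letter=None, sleep=time.sleep):
    """Retry `task` with capped exponential backoff.

    If every attempt fails, hand the last error to `dead_letter`
    (a fallback queue, an alert, anything but /dev/null) and
    return None; with no dead_letter, re-raise it.
    """
    last = None
    for i in range(attempts):
        try:
            return task()
        except Exception as exc:
            last = exc
            sleep(min(cap, base * 2 ** i))  # 0.5s, 1s, 2s, ... capped
    if dead_letter is not None:
        dead_letter(last)  # parked, not lost: someone can replay it
        return None
    raise last
```

The dead-letter branch is what keeps incidents from turning into folklore: failed work ends up somewhere inspectable and replayable instead of vanishing.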
cluster map
observability alerts
case vault / actual shipped work
proof over
vibes.
The style is loud on purpose. The receipts stay louder. These are the kinds of projects I build when the task is real and the edge cases are annoying.
case/01
more-stars / telegram commerce
built for moving money and keeping ops sane
Service for selling Telegram stars and gifts with the boring parts handled properly: routing, reliability, and a product flow that does not fall apart.
case/02
unlimy / vpn service
private internet plumbing with real constraints
My own VPN service. This is infra, routing, uptime, and customer-facing delivery wrapped into one thing that has to just work.
case/03
unlimy relay / mtproto config delivery
a clean way to ship configs without hand-holding
Auto-issued MTProto configs through a relay layer. Less ceremony, more reliable distribution.
case/04
asic hub / vpn operations
keeps the weird hardware side usable
A VPN hub for ASIC operations, where operational simplicity matters more than pretty language.
case/05
unlimy flow / ai posts
model-driven posting pipeline with an actual use case
AI model for generating posts. Useful when the job is not just making text, but running a pipeline that actually gets it published.