strapon.tech
impossible? perfect.
no brochure. just proof.
friends only / real work only
>booting strapon.tech_

01> waking the machine...

02> checking the stack...

03> reconnecting the wild parts...

04> redis online

05> postgres online

06> telegram gateways online

07> ai pipelines unstable

08> proxy mesh synchronized

09> docker daemon still being a dick

10> uptime: ugly but alive

11> mental state: shipping

monument state

STRAPON.TECH

i do not live in one lane.

i ship the messy stuff too.

if it can be built, wired, deployed, or rescued - i can probably do it.

the stack is wide because the work is wide.

operator profile

60+ repos, a stupidly broad stack, and zero fear of weird, ugly, high-stakes work.

backend / infra / telegram / ai workflows / shipping fast

proof / no cosplay

this is where the flex turns into evidence

60+
github repos live
60+ GitHub repos
frontend to infra
ML / AI / data
VPN / proxy / deploys

ambient instability

identity / no-cosplay layer

no one-lane
no cute
roleplay.

I do not need to pretend I am only frontend, only backend, or only infra. The job changes, so I change with it.

01

I do not make little brochure sites.

02

I build systems people actually use.

03

Sometimes the whole product stack.

04

Frontend.

05

Backend.

06

Data.

07

ML / AI.

08

Infra.

09

Bots.

10

Payments.

11

VPN / proxy / ops.

12

If the task is weird, even better.

"sometimes this stops being development and starts being owning the whole damn machine."

chaos / big-range layer

too much
for one
lane.

The point is not polish for polish's sake. The point is range: products, pipelines, infra, AI, deployments, and the glue that keeps all of it from falling apart.

active payload names

unstable visibility

more-stars / unlimy / relay / analytics / vpn mesh / transformers / workers / tg infra / orchestration / monitoring / proxy routing / payment systems / automation layers / distributed pipelines / scrapers / experiments / side-projects / unfinished weapons

fake operator feed

11> commit 9c1fd2 - hotfix/retry-mesh

12> deploy/main -> fra1-vps-03

13> queue.redis backlog: 142

14> worker.ai.infer status: throttled

15> ssh root@hel1-core active

16> telegram.event stream burst x48

17> branch/unstable-operator merged

18> proxy.mesh sync delta: 02

19> postgres vacuum overdue

20> ci/cd pipeline rerun with spite

21> api /payments/reconcile 200

22> scraper cluster woke up again

23> tg-gift-worker lag detected

24> payments-sync resumed after backpressure

25> relay-beat heartbeat nominal

26> anti-abuse model confidence dipped

27> fallback-gateway warmed in iad-01

28> orchestration-layer deferred cleanup

branch rot / service pressure

origin/main

feature/mesh-retry

ops/panic-hotfix

_noise floor dropped / signal stays

good.

the queue survived. the product survived. i survived.

production remembers everything, so ship like it.

some systems do not sleep. neither do the people running them.

ego / earned by shipping

01

most developers ship one layer.

02

i ship the whole stack.

03

people call it overengineering.

04

then the edge cases arrive.

05

production is the only serious review.

06

pretty code that dies in incidents is decorative nonsense.

07

speed matters.

08

breadth matters.

09

shipping matters.

legend / operational mass

not just a dev.
a guy who
actually ships.

The myth works when the surface area is huge but still coherent. Services, queues, workers, products, clients, weird requests - it all needs to stay alive.

service graph fragments

all paths remain live

tg-gift-worker

dependency mesh active

payments-sync

dependency mesh active

queue-recovery

dependency mesh active

relay-beat

dependency mesh active

anti-abuse

dependency mesh active

node-health

dependency mesh active

orchestration-layer

dependency mesh active

ai-routing

dependency mesh active

fallback-gateway

dependency mesh active

proxy-mesh

dependency mesh active

stars-fulfillment

dependency mesh active

packet-telemetry

dependency mesh active

incident memory

incident/4421: relay mesh entered degraded visibility

incident/4421: fallback-gateway promoted in fra1

incident/4422: stars-fulfillment queue saturation avoided

incident/4423: anti-abuse false-positive cluster contained

incident/4424: ai-routing drift acknowledged, not fatal

atmospheric throughput

packet-noise: present
crt-hum: implied
server-room: breathing
distant alarms: muted

the machine / private internet empire

production
is the only
real judge.

This is not a polite portfolio. It is a control room for the kind of work where frontend, backend, data, infra, and AI all touch each other.

github repos

60+

stack width

wide

client trust

friends first

delivery mode

full send

more-stars / commerce core

Telegram commerce infra, payment routing, event flow discipline, and the boring stuff that makes money move without drama.

latency 18ms / workers 07 / risk low
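The "boring stuff that makes money move" is mostly discipline like idempotency. A minimal sketch, with made-up names and shapes rather than the actual more-stars code: apply each payment event exactly once, so a re-delivered webhook never double-charges.

```python
def apply_payment(seen: set[str], ledger: list[tuple[str, int]],
                  payment_id: str, amount: int) -> bool:
    """Apply a payment event exactly once, keyed by its provider-issued id.

    Payment providers re-deliver webhooks after timeouts, so the handler
    treats the payment id as an idempotency key: duplicates become no-ops.
    """
    if payment_id in seen:
        return False  # already applied; the retry changes nothing
    seen.add(payment_id)
    ledger.append((payment_id, amount))
    return True

seen: set[str] = set()
ledger: list[tuple[str, int]] = []
apply_payment(seen, ledger, "pay_001", 500)  # first delivery: applied
apply_payment(seen, ledger, "pay_001", 500)  # webhook retry: ignored
```

In production the `seen` set would live in redis or postgres, not process memory, but the rule is the same: one id, one ledger entry.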

relay / proxy control plane

Fallback-aware routing for when upstream reality gets flaky and you still need the thing to work.

mesh synced / failover armed / egress green
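Fallback-aware routing reduces to one loop: try upstreams in order, treat any failure as "flaky", and hand the request to the next hop. A hedged sketch with upstreams injected as plain callables (the real relay speaks network protocols, not Python functions):

```python
from typing import Callable, Sequence

def route_with_fallback(upstreams: Sequence[Callable[[], str]]) -> str:
    """Return the first upstream's answer; fall through on any failure."""
    last_error: Exception | None = None
    for upstream in upstreams:
        try:
            return upstream()
        except Exception as exc:
            last_error = exc  # note why this hop failed, arm the next one
    raise RuntimeError(f"all upstreams failed: {last_error!r}")

def flaky() -> str:
    raise ConnectionError("upstream timed out")

def healthy() -> str:
    return "egress green"
```

`route_with_fallback([flaky, healthy])` returns `"egress green"` without the caller ever seeing the timeout; only when every hop fails does the error surface.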

analytics / product telemetry

Turns user movement, bot events, and operator noise into product signal that is actually useful.

signals 144 / dashboards live / events noisy
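Turning "events noisy" into signal usually starts with a rollup: bucket raw events by time window and type. A minimal sketch over a made-up event shape (`{"ts": unix_seconds, "type": str}`):

```python
from collections import Counter
from datetime import datetime, timezone

def rollup(events: list[dict]) -> Counter:
    """Collapse raw bot/user events into per-(minute, type) counts."""
    buckets: Counter = Counter()
    for event in events:
        ts = datetime.fromtimestamp(event["ts"], tz=timezone.utc)
        # key by minute bucket + event type: dashboards want density, not rows
        buckets[(ts.strftime("%Y-%m-%dT%H:%M"), event["type"])] += 1
    return buckets
```

A real pipeline would stream this through a queue and persist the buckets, but the shape of the signal is the same.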

orchestration-layer / queue-recovery

Keeps workers, retries, and fallback queues from turning incidents into folklore.

drain 91% / retries paced / incident debt low
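"retries paced" is the load-bearing part: a hot retry loop is how one incident becomes folklore. A sketch of capped exponential backoff, kept deterministic here; production would add jitter (all names are illustrative):

```python
import time

def backoff_schedule(attempts: int, base: float = 0.5,
                     cap: float = 30.0) -> list[float]:
    """Delays of base * 2^n seconds, capped so retries never spiral."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

def retry_paced(task, attempts: int = 5, sleep=time.sleep):
    """Run task, sleeping along the backoff schedule between failures."""
    delays = backoff_schedule(attempts)
    for i, delay in enumerate(delays):
        try:
            return task()
        except Exception:
            if i == len(delays) - 1:
                raise  # schedule exhausted: surface the failure
            sleep(delay)  # production: add jitter to avoid thundering herds
```

Injecting `sleep` keeps the pacing testable; swapping in `time.sleep` gives the real behavior.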

cluster map

observability alerts

wireguard edge synchronized: ok
nginx reload completed: ok
worker pool stabilized: ok
grafana ghost panel alive: ok
postgres locks acceptable: ok
ai pipeline drift detected: watch
queue-recovery entering watch state: ok
node-health mismatch in waw-edge: ok

case vault / actual shipped work

proof over
vibes.

The style is loud on purpose. The receipts stay louder. These are the kinds of projects I build when the task is real and the edge cases are annoying.

case/01

more-stars / telegram commerce

built for moving money and keeping ops sane

Service for selling Telegram stars and gifts with the boring parts handled properly: routing, reliability, and a product flow that does not fall apart.

telegram bot / payments / commerce flow / delivery

case/02

unlimy / vpn service

private internet plumbing with real constraints

My own VPN service. This is infra, routing, uptime, and customer-facing delivery wrapped into one thing that has to just work.

vpn / infra / routing / support

case/03

unlimy relay / mtproto config delivery

a clean way to ship configs without hand-holding

Auto-issued MTProto configs through a relay layer. Less ceremony, more reliable distribution.

mtproto / relay / automation / config delivery
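Auto-issuing a config mostly means handing the client a deep link it can tap. Telegram clients accept `tg://proxy` links carrying server, port, and secret parameters; a hedged sketch (hostname and secret are made up):

```python
from urllib.parse import urlencode

def mtproto_link(server: str, port: int, secret: str) -> str:
    """Build the tg://proxy deep link a client imports with one tap."""
    return "tg://proxy?" + urlencode(
        {"server": server, "port": port, "secret": secret}
    )

link = mtproto_link("relay.example.net", 443, "dd00112233445566778899aabbccddeeff")
```

The relay layer's job is then just issuing fresh `(server, port, secret)` triples and serving these links, which is why the distribution side can be fully automated.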

case/04

asic hub / vpn operations

keeps the weird hardware side usable

A VPN hub for ASIC operations, where operational simplicity matters more than pretty language.

vpn / ops / hardware / admin flow

case/05

unlimy flow / ai posts

model-driven posting pipeline with an actual use case

AI model for generating posts. Useful when the job is not just generating text, but shipping a pipeline that reliably turns it into published output.

ai / content / generation / workflow
_terminal quiet, signal loud

need something nasty built?

friends, clients, weirdos - if it needs shipping, we talk.