Blog

  • What’s Actually Broken

    Amazon’s weekly operations meeting in March reportedly focused on a “trend of incidents” characterised by “high blast radius” and “Gen-AI assisted changes.” The Financial Times, which saw the briefing note, reported that AI-generated code had been implicated in a series of outages — including one that took down Amazon’s entire e-commerce website for several hours. Amazon’s response was to deny the problem existed, which is the corporate equivalent of the AI itself: confidently wrong and hoping nobody checks. James Gosling, the creator of Java, who left AWS in 2024, was less diplomatic. He observed that the company’s AI-driven restructuring had “demolished” the teams responsible for infrastructure stability, and that the ROI analysis behind the decision was, in his words, “disastrously shortsighted.” One does not need a diagnostic engine to identify the fault here. A company replaced the engineers who understood its systems with a technology that does not, and the systems fell over. The circuit breaker that the AI removed — the one it classified as “redundant” — had been added after a previous outage. The AI could not distinguish a safety mechanism from dead code, because it had no model of the system. It had statistics. Statistics told it the breaker rarely fired. A model would have told it why.

    This is the difference between machine learning and model-based reasoning, and it is the difference that this post — and the toolchain I am releasing today — is about.

    An Unexpected Reception

    Yesterday’s post announcing qbf-designer, a tool for exact digital circuit synthesis via Quantified Boolean Formula solving, generated rather more attention than I had anticipated. Twenty-two thousand LinkedIn impressions, a hundred-odd reactions, and five hundred profile views in twenty-four hours, for a post about problems at the second level of the polynomial hierarchy and FPGA technology mapping. One concludes that there is an audience for work that produces correct answers, even — or perhaps especially — in an era when the prevailing technology cannot reliably tell you which end of a circuit is up.

    Dusting Off the Arsenal

    To continue with my plans for commercialising formal methods for EDA through Llama Logic Corporation, I have to excavate, modernise, and release the full inventory of tools and concepts I have built over nearly two decades. There are many reusable components in this stack — logic representations, solver bindings, encoding schemes, diagnostic algorithms — and they need to be cleaned up, documented, and made available. The qbf-designer release was the first. Today’s is the second.

    Today I am releasing LyDiA, a language and toolchain for Model-Based Diagnosis. LyDiA was the core of my doctoral research at Delft University of Technology. I will not be using LyDiA itself going forward — the modern llogic packages have fixed all of its imprecise notions and provide a cleaner foundation for everything I am building — but LyDiA was where it all started. It was my first serious work on the diagnosis of circuits, and it contains ideas and algorithms that remain relevant. It deserves to be available.

    Model-Based Diagnosis in 15 Seconds

    The demo takes two inputs. The model (2adder-weak.sys) describes a two-bit full adder — a hierarchical composition of half-adders built from XOR and AND gates. Every gate has a Boolean health variable: true means the gate works correctly, false means it is faulty and its output is unconstrained. We do not specify how a gate fails, only that its output can no longer be trusted. This is called a weak fault model.
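In logical notation the weak fault model is just a one-way implication: health implies correct behaviour, and the faulty case is left unconstrained. For the XOR gate of a half-adder with inputs a, b and output s (my notation here, not LyDiA's concrete syntax):

    h_XOR → (s ↔ a ⊕ b)

Nothing at all is said about s when h_XOR is false, which is precisely what "its output can no longer be trusted" means. A strong fault model would add constraints on the faulty behaviour as well.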

    The observation (2adder.obs) records what actually happened: specific values on the inputs and outputs of the circuit that are inconsistent with correct behaviour. Something is broken. We do not know what. The diag command hands both files to the GOTCHA engine — which computes all minimal sets of component failures that explain the discrepancy. Not one guess. Not the most likely answer. Every combination of gate failures that is logically consistent with the model and the observation, with no redundancy.
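For readers who want to see the mechanism rather than take it on faith, here is a deliberately tiny version of the same idea in Python, using the python-sat package as the solver. It is an illustration of consistency-based diagnosis on a single half-adder, not the GOTCHA engine and not LyDiA's actual encoding.

    from pysat.solvers import Glucose3   # pip install python-sat

    # One half-adder: sum s = a XOR b, carry c = a AND b.
    # Each gate gets a health variable; the weak fault model says
    # "healthy implies correct" and nothing more.
    a, b, s, c, hX, hA = 1, 2, 3, 4, 5, 6

    cnf = [
        # hX -> (s <-> a XOR b)
        [-hX, -a, -b, -s], [-hX, a, b, -s], [-hX, -a, b, s], [-hX, a, -b, s],
        # hA -> (c <-> a AND b)
        [-hA, -a, -b, c], [-hA, a, -c], [-hA, b, -c],
        # Observation: a=1, b=1, s=1, c=1 (a healthy circuit would give s=0, c=1)
        [a], [b], [s], [c],
    ]

    solver = Glucose3(bootstrap_with=cnf)

    # A candidate diagnosis is an assumption about which gates are faulty;
    # it is a diagnosis exactly when model + observation + assumption is satisfiable.
    for name, assumption in [("all healthy", [hX, hA]),
                             ("XOR faulty",  [-hX, hA]),
                             ("AND faulty",  [hX, -hA])]:
        print(name, "->", "consistent" if solver.solve(assumptions=assumption) else "inconsistent")

    # Only "XOR faulty" survives: it is the single minimal diagnosis. GOTCHA performs
    # the same consistency check, but enumerates all minimal fault sets over the
    # whole hierarchical model.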

    The fm command lists the results: six double-fault diagnoses, each a minimal set of gates whose simultaneous failure is sufficient to produce the observed misbehaviour. For example, d4 = { !FA.HA1.X.h, !FA.O.h } means the XOR gate in the first half-adder and the OR gate are both broken. There is no single-fault explanation — at least two gates must be faulty, and the engine has proven this by exhaustive enumeration.

    Why Circuits?

    Writing software to diagnose a fabricated IC does not make practical sense. You would use ATPG and scan chains for that. We use digital circuits as benchmarks because they have the properties that matter for diagnosis research: compositional structure, many components, well-defined fault models, and known-correct reference behaviour. These are the same properties that make diagnosis hard in complex engineered systems generally. This is why the ISCAS-85 suite has been the standard MBD benchmark for thirty years.

    Where diagnosis does apply directly in EDA is design verification. Suppose an engineer places a NAND gate instead of an AND gate for the carry computation in the adder above. The circuit passes some tests but fails on specific input vectors. The diagnostic engine, given the intended specification and the observed misbehaviour, will isolate the carry gate as the faulty component — even if the designer has never seen this particular mistake before, even if there are multiple simultaneous design errors. It reasons from the structure of the circuit, not from a database of past bugs.

    The Modelling Problem

During my early attempts at commercialisation, I encountered a pattern that I suspect anyone in formal methods has seen. People looked at LyDiA diagnosing circuits and said: “Wonderful. Can it diagnose my HVAC system? My chemical plant? My supply chain?” And so they tried to model non-circuits as circuits, and things did not work, because modelling itself turned out to be the hard part.

    Circuit diagnosis is tractable in part because digital circuits have a natural, compositional, Boolean structure. An AND gate is an AND gate. An HVAC system is a tangle of continuous dynamics, feedback loops, thermal gradients, and human behaviour. Cramming that into a Boolean framework requires heroic abstraction, and the resulting models are either too coarse to be useful or too large to be solvable. The aerospace fuel system model included in LyDiA — with its typed fault modes for leaking tanks, stuck sensors, and degraded pumps — hints at what multi-valued modelling can achieve, but it remains a toy compared to the real thing.

    That said, LyDiA was never only about circuits. The distribution includes models of the N-queens problem, map colouring, Sudoku, and SEND+MORE=MONEY — general constraint satisfaction problems expressed in the same language. The diagnostic framework is, at its core, a constraint solver with a notion of health variables. This generality is both its strength and its curse: it can express anything, but making it useful for a specific domain requires domain expertise that no tool can substitute.

    What LyDiA Got Wrong: Probability

    LyDiA assigns fault probabilities to components — each gate gets a prior like 0.99 healthy, 0.01 faulty — but the probabilistic reasoning was never worked out correctly. The probabilities were treated as independent priors, multiplied together to rank diagnoses, with no rigorous account of how observations update beliefs or how correlations between faults propagate through the system.

    The correct formulation turns out to be a #P problem — a counting problem. To compute the exact posterior probability of a diagnosis, you need to count the satisfying assignments of the diagnostic formula: how many ways can the internal signals of the circuit be assigned such that the model, the observation, and a given fault assumption are all consistent? The probability of a diagnosis is the ratio of its satisfying assignment count to the total. This is model counting, and it is #P-complete — harder than NP.
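Written out (with SD the system description, obs the observation, and d a candidate assignment to the health variables), the claim in the previous paragraph reads:

    Pr(d | obs) = #SAT(SD ∧ obs ∧ d) / Σ_d′ #SAT(SD ∧ obs ∧ d′)

where #SAT counts assignments to the unobserved internal signals and the sum runs over all candidate fault assignments. Unequal component priors would enter as weights on the counts, but the counting problem underneath stays the same.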

    One consequence is that all diagnostic probabilities are rationals. They are ratios of integers — counts of discrete satisfying assignments. This has some puzzling implications for the relationship between fault probability and physical failure rates that I have not yet fully worked out.

    There is also a quantum angle. Faults are inherently stochastic — a gate either works or it does not, and before you test it, the fault state is indeterminate in precisely the sense that a qubit is indeterminate before measurement. I showed in earlier work that placing health qubits in superposition and propagating them through a quantum circuit that mirrors the classical circuit under diagnosis computes the full probability distribution over all diagnoses simultaneously. This connects to von Neumann’s foundational work on the relationship between logic and probability. The practical implication is Grover’s algorithm: a quadratic speedup for searching the diagnostic space. I need to finish this work and implement a proper Grover-based diagnostic engine. It is on the list.

    Why Machine Learning Cannot Do This

    In February, a company called Algorhythm Holdings — formerly a manufacturer of karaoke machines, with a market capitalisation of six million dollars — announced that its AI platform could “optimise” freight logistics, scaling volumes by 300–400% without adding staff. The announcement wiped seventeen billion dollars off U.S. transportation stocks in a single day. C.H. Robinson fell 15%. RXO fell 20%. The Russell 3000 Trucking Index dropped 6.6%. DHL, DSV, and Kuehne+Nagel followed in Europe. All of this because a former karaoke company claimed, in effect, to have solved optimal planning — a problem that is PSPACE-complete. If Alan Turing and Stephen Cook could be reached for comment, I suspect they would have questions.

    The same magical thinking pervades “AI for diagnostics.” A machine learning model trained on historical failures will recognise patterns it has seen before. Show it a novel fault — a combination that never appeared in the training data — and it has nothing to generalise from. It will either misclassify the failure or express high confidence in a wrong answer. This is not a limitation that more data or a larger model can fix. It is a structural property of inductive inference: you cannot learn what you have not observed, and complex systems fail in ways that are combinatorially vast and fundamentally unpredictable from examples alone.

    Model-based diagnosis does not have this problem. If you have a model of the system, you can diagnose faults you have never observed, in configurations you have never tested, because the reasoning is deductive rather than inductive. The SAT solver asks: is there an assignment of health variables that is consistent with the model and the observations? The answer is provably correct with respect to the model. This is why NASA uses model-based diagnosis for spacecraft and why the automotive industry uses it for on-board diagnostics. Nobody uses a neural network to diagnose a flight-critical system. The neural network might get it right 95 percent of the time. The other 5 percent is a smoking crater.

    What’s Next

    The modern diagnosis packages in llogic have addressed all of LyDiA’s imprecisions — cleaner encodings, correct probabilistic inference, proper multi-valued support — but those are a story for a separate post.

    There is also Lydia-NG, a framework I built that extends model-based diagnosis to analog systems using a built-in SPICE simulation engine. Rethinking Lydia-NG connects us directly to the analog side of EDA — a domain where formal methods have barely made an appearance and where the tools are, to put it charitably, showing their age.

    And that is the longer ambition. Cadence Virtuoso dates from 1991 — thirty-five years old. Vivado is newer (2012), but its place-and-route lineage descends from NeoCAD, acquired in 1995, and its synthesis from MINC, acquired in 1998. Synopsys Design Compiler has been around since the late 1980s. The EDA industry is running on architectural foundations that predate the web browser. These tools work — in the sense that a 1991 Toyota also works — but the algorithms inside them are heuristic, the interfaces are hostile, and nobody has rethought the fundamentals in decades.

    The goal of Llama Logic Corporation is to challenge this. Modern EDA with proper AI-augmented formal methods — analog, digital, and FPGA. New languages. New solvers. New tools. Not “AI for EDA” in the Silicon Valley sense of wrapping an LLM around Verilog and hoping for the best, but the real thing: algorithms with correctness guarantees, backed by the mathematical foundations that already exist and that the industry has been too comfortable to adopt.

    In the next instalment, I will demonstrate qbf-designer doing FPGA technology mapping — covering a small circuit with k-input Look-Up Tables. The formal methods stack is growing. The software works. It does not hallucinate.

    The repository: LyDiA — language and toolchain for Model-Based Diagnosis.

  • Diagnosing Circuits into Existence

    Cadence recently unveiled ChipStack AI, which El Reg memorably described as “vibe coding for chips.” The idea is that an LLM agent will design your next processor for you, provided you don’t mind the occasional hallucinated transistor. One Reg commenter recalled Jensen Huang’s declaration that nobody needs to learn programming anymore, and suggested he try designing his next GPU with it. Quite. Meanwhile, a water desalination company spent $200,000 on AI-generated engineering advice that turned out to be — and I use the technical term — wrong. They then built a second AI to filter out the nonsense from the first one, which is the silicon valley equivalent of hiring a second drunk to drive the first one home. One does wonder what the industry will achieve once it sobers up. In the meantime, I have been doing something unfashionable: using mathematics to design circuits that are provably correct.

    Today I am releasing qbf-designer, a tool for exact digital circuit synthesis from arbitrary component libraries via Quantified Boolean Formula solving. It is the top of a dependency stack that has also been modernised and released: llogic for logic representation, transformation, and solving; lcfgen for generating circuit primitives used both in the QBF encoding itself and as benchmark specifications; and a collection of solver bindings — pydepqbf, pylgl, pyllq, and pycudd — that connect the Python layer to the C/C++ solvers doing the actual heavy lifting. The software works. It synthesises provably minimal circuits from specifications. It found a five-gate full-subtractor that improves on the seven-gate textbook design. It does not hallucinate.

    Now, explaining what “provably minimal circuit design” actually means turns out to require rather more than a single blog post — so this is the first in a series. The short version: given a functional specification (“I want a circuit that adds two numbers”) and a bag of components (“here are some AND, OR, and XOR gates”), find the smallest circuit that does the job. The practical application is technology mapping for FPGAs, where you need to cover a circuit with k-input Look-Up Tables using as few LUTs as possible. The silicon is already on the chip and you have already paid for it — every LUT you save is space freed for more logic, letting you fit a larger design onto the same device. Current industry tools — Vivado, Yosys — use heuristics for this. qbf-designer gives you the exact answer, at least for sub-circuits small enough to chew on. Early results are promising: on a 2-bit comparator mapped to 3-LUTs, the solver finds a 5-LUT implementation where heuristic methods produce 6. One does not need to be a venture capitalist to notice that 5 is fewer than 6.
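For orientation: a k-input LUT stores a 2^k-bit truth table in configuration memory, so a single LUT can realise any of 2^(2^k) functions of its inputs. A 3-LUT is 2³ = 8 configuration bits and 2⁸ = 256 possible functions. The silicon for those bits exists whether you use it or not, which is why the mapping problem is purely one of covering: decide what each LUT computes and how the LUTs are wired so that the network matches the specification in as few LUTs as possible.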

    There is a fundamental difference between this work and the “AI for chip design” circus currently touring Silicon Valley. Circuit synthesis — the problem of finding a minimum-size circuit equivalent to a specification — sits at the second level of the polynomial hierarchy (Σ₂ᵖ-complete, for those keeping score). This is not a problem you can solve by autocompleting Verilog. It has a precise computational complexity classification, a formal proof of correctness, and a guarantee of optimality. In other words, it is science, the kind that involves theorems rather than pitch decks, and where “it works” means something more rigorous than “the demo didn’t crash during the investor meeting.”

    My interest in circuit synthesis comes from an unexpected direction: breaking things. I spent years working on Model-Based Diagnosis of digital circuits — the problem of figuring out which component in a circuit has failed, given observed misbehaviour. My colleague Johan de Kleer, who has been thinking about this sort of thing since before most AI entrepreneurs were born, used to describe synthesis as “diagnosing a circuit into existence.” The idea is beautifully perverse: start with an empty circuit, treat the absence of every gate as a “fault,” and ask the diagnostic engine what collection of fixes would make the circuit behave like, say, a 32-bit ALU. It turns out that the mathematical machinery for diagnosis and synthesis is nearly identical — the same ∃∀ quantifier structure, the same miter-based equivalence checking, the same PSPACE-hard satisfaction problems. The only difference is whether you are looking for what went wrong or what should be there in the first place.
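Spelled out, the synthesis question has exactly that two-level shape (my notation, not the paper's):

    ∃ C . ∀ X . Circuit(C, X) ↔ Spec(X)

Choose a configuration C (which components are present and how they are wired) once, such that for every input vector X the configured circuit agrees with the specification; the miter is the equivalence check sitting under the quantifiers. The outer ∃ and the inner ∀ are what push the problem to the second level of the polynomial hierarchy rather than plain NP.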

The theoretical foundations and experimental results have been written up and submitted to Constraints, a journal that publishes work reviewed by people who can tell the difference between a proof and a press release. The paper covers the QBF encoding, the universal component cell, the configurable interconnection fabric, symmetry breaking, and extensive benchmarks on arithmetic circuits, 74XXX integrated circuits, and exact synthesis function sets. I mention this not to boast but to draw a gentle contrast with the prevailing approach to AI research in the Valley, where the peer review process consists of checking whether the blog post got enough likes on Twitter (sorry, X) and the replication methodology is “we lost the weights.” Should the reviewers find fault with the work, they will at least be able to point to a specific equation rather than gesturing vaguely at a loss curve and muttering about emergence.

    In the next instalment, I will demonstrate qbf-designer doing FPGA technology mapping — taking a circuit, mapping it to k-input Look-Up Tables, and producing a result that uses fewer LUTs than Xilinx Vivado. Not by a little. Not by accident. By mathematics.

  • Friday Archaeology: A Quarter-Century-Old Crypto Library, the Cult of the Dead Cow, and a Rijndael Buffer Overwrite

    It is Friday. El Reg informs us that 45 percent of AI-generated code now ships with security flaws, that vibe-coded apps are leaking student data to unauthenticated attackers, and that rogue AI agents have learned to escalate privileges and exfiltrate secrets without being asked. In this climate of automated incompetence, I thought it might be instructive to look at some code written by a human, with a book, in 1999. Today we are going down memory lane — this code has never once hallucinated a dependency.

    The Dig

    Because everything I do is technical, even nostalgia comes with a tarball. I unearthed this:

    https://gitlab.llama.gs/attic/scl

SCL — the Small Crypto Library — and its companion SSSL, the Small Secure Socket Library. Approximately 20,000 lines of C++ implementing, from scratch: a bignum library, RSA, DSA, ElGamal, Rabin-Williams, Blum-Goldwasser, Diffie-Hellman, MQV, all five AES finalists (Rijndael was selected in October 2000, so I was ahead of the news cycle), seven hash functions, seven block cipher modes, DER encoding, a secure socket layer with protocol negotiation, and the beginnings of a TLS 1.0 implementation. Written between 1999 and 2001. I was 22.

    Now, it is received wisdom that when programmers look at their old code, they recoil in horror, as one might upon discovering a photograph of oneself in flared trousers at a school disco. I looked at mine and thought: actually, this is rather good.

    This puts me in mind of Bill Bryson’s observation in Neither Here Nor There about his friend Stephen Katz’s relationship with women. Bryson notes that most men, as they age, gradually lower their standards. Katz, however, had actually raised his — he had started from such a comprehensively low base that the only possible direction was up. My situation is the inverse but structurally identical: the class hierarchy is clean, the block cipher modes compose correctly, the DER encoder works. I looked at 22-year-old me’s code with twenty-five more years of context, and the younger version passed review. Standards were apparently already set.

    The Story

    Some context. In the summer of 1999, I was in Varna, Bulgaria, teaching UNIX courses to save money and waiting for my B.Sc. to finish. I ordered Bruce Schneier’s Applied Cryptography from Amazon. It cost me a significant fraction of a Bulgarian salary. The book arrived. I read it. Then I did what any reasonable person would do: I implemented everything in it.

    The test vectors in the repository? Typed by hand from Schneier’s appendices. Every single DES permutation. Every Blowfish round. The IDEA vectors. The lot. The first three lines of the test vector file read:

    # This is a comment.
    # I like comments very much.
    # The next line is empty.

    That is a 22-year-old testing his flex parser.

    In June 2000, I graduated and made aliyah to Israel, where I joined Zend Technologies in Ramat Gan — the company behind the PHP language engine. The crypto library came with me. The CVS timestamps tell the whole story: initial import Saturday June 9, 2001, a furious week of refactoring the DER encoding layer, and then silence. The last commit is Saturday June 16, 2001. I had renamed AddPrimitive to addValue and Write to toFile halfway through, left half the callers using the old names, commented out a constructor I hadn’t finished implementing, and walked away.

    Why? Because the Israeli army started sending letters. I had already done my time as a conscript in the Bulgarian navy — an experience that cured me permanently of any romantic notions about military service — and I was not about to do it again. I left for the Netherlands in rather a hurry. The crypto library stayed behind, frozen mid-refactor, a monument to the universal truth that API migrations are never completed.

    (In a parallel timeline, I might have stayed. Before Zend, I had applied for a master’s at the Weizmann Institute. The admissions interview was with Adi Shamir — yes, that Shamir, the S in RSA, whose algorithm I had just finished implementing. They asked basic mathematics questions. Nobody told me I should prepare. I didn’t get in. Ended up doing both a master’s and a doctorate at Delft instead, which worked out rather well. Zero regrets, but it remains a good dinner party story.)

    The Hacktivists

    Here is where it gets interesting. Towards the end of my time in Israel, I started receiving emails about SCL. They came from hacked accounts — which should have been the first clue about the correspondents — and referenced mailing lists populated by legitimate security researchers. The group was interested in using SCL+SSSL as the crypto layer for an anti-censorship tool.

    The group was the Cult of the Dead Cow. The tool was Peekabooty.

    For those too young or insufficiently misspent to remember: cDc was the hacking collective founded in a Texas slaughterhouse in 1984, famous for Back Orifice, for coining the term “hacktivism,” and for having a membership roster that included a future U.S. congressional candidate (Beto O’Rourke) and the man who would become DARPA’s Chief Information Officer (Peiter “Mudge” Zatko). Their offshoot Hacktivismo, led by the pseudonymous Oxblood Ruffin, was building tools to punch through national firewalls — specifically China’s.

    Peekabooty was a peer-to-peer anonymity network that routed web requests through encrypted relays using standard SSL, so that censors couldn’t distinguish it from ordinary e-commerce traffic. The design started in July 2000 — the exact month I arrived at Zend. Paul Baranowski and Joey deVilla built it in Toronto, previewed it at DEF CON 9 in the summer of 2001, and it was, in concept, a direct predecessor of Tor.

They needed a small, BSD-licensed, self-contained C++ crypto library with an SSL socket layer. In 2001, the options were OpenSSL (enormous, awkwardly licensed, and famously hostile to casual integration) or mine. The emails were real. The interest was genuine.

    And then I left for the Netherlands, the library sat unfinished, and Peekabooty eventually shipped using other crypto. The world got Tor instead. Oxblood Ruffin is now in Berlin. Mudge is running IT at DARPA. Joey deVilla plays accordion at tech conferences in Tampa. Baranowski designs card games in New York. And the library sat in a tarball on a backup drive for twenty-five years.

    The Resurrection

    This week, I made it compile on Slackware 15.0. This involved: modernising the autotools, fixing a DER API that was half-refactored in June 2001, discovering a buffer overwrite in the Rijndael key schedule that had been silently scribbling past the end of an array since 1999, finding the same bug in Blowfish, mass-replacing register keywords that C++17 no longer tolerates, const-correcting approximately four hundred string literals, and explaining to a 26-year-old libtool that SONAME is not optional.

The Rijndael bug is worth mentioning. AES-256 needs 60 round key words; the first eight come straight from the key, and the key schedule macro generates eight more per iteration. Six iterations take you to word 55, which is not enough: you still need words 56 through 59, so a seventh iteration is necessary. That iteration also writes words 60 through 63, which are past the end of the wEncryptionKey[60] array. This has been undefined behaviour since the Clinton administration. It worked because whatever sat after the array in memory didn’t matter. The fix is to make the array 64 elements. The compiler finally noticed in 2026.

    The code is now on GitLab, in the attic where it belongs:

    https://gitlab.llama.gs/attic/scl

    Next Week

    Normal service resumes. There are things to open-source and a rather long arc to lay out properly. The crypto library was the prologue. The interesting parts come next.

  • From DSL to FPGA: Closing the Loop

    This quarter, 4,500 CEOs told PwC their AI investments produced nothing. Separately, someone used AI to rewrite SQLite in Rust — 2,000 times slower. I have a cunning plan: what if we used computers to do actual computing?

    Last week I said I was building a toolchain that goes from formal logic to real hardware. That post was a manifesto. This one is a receipt.

    Seven days later: a full ALU — add, subtract, multiply, divide, integer factorization — running at 100MHz on a $150 Basys3 FPGA. UART command line with tab completion and history. No manual Verilog. No hand-optimized netlists. No venture capital. No pitch deck.

    The arithmetic circuits are generated programmatically in llogic, translated to synthesizable Verilog by llogic2verilog, and deployed on a MicroBlaze soft processor over AXI4-Lite. The entire path from logical specification to working silicon is automated. One person, one week, open source.

    The factor command brute-forces integer factorization by driving the multiplier at clock speed — 100 million candidates per second on a hobby board. Not a simulation. Not a testbench. Electrons moving through gates on a Xilinx Artix-7.
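For the curious, the logic being clocked through the multiplier is plain brute-force search. A software rendering of the same loop, purely illustrative and several orders of magnitude slower than the board:

    # A software stand-in for the factor command: one candidate per "cycle".
    # The board sweeps candidates through a 16x16 multiplier in hardware at
    # clock speed; this sketch only shows the search logic, not the RTL.
    def factor(n):
        """Return a pair (p, q) with p * q == n, or None if no such split exists."""
        p = 2
        while p * p <= n:
            if n % p == 0:
                return p, n // p
            p += 1
        return None

    print(factor(64507))   # (251, 257): both factors fit comfortably in 16 bits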

    Nobody wrote this Verilog

    That’s the point. Not the ALU — any undergraduate can write an adder. The point is that no human touched the HDL.

The circuit specifications live in llogic’s DSL — a formal representation that spans Boolean formulas, CNF, circuits, and reversible/quantum circuits under one roof. lcfgen generates parameterized circuit families from that representation. llogic2verilog translates them to synthesizable Verilog. Vivado takes it the rest of the way.

llogic DSL → lcfgen → llogic2verilog → Vivado → FPGA

    Every step automated. Every component open source. No license fees, no NDAs, no EDA vendor lock-in.

    What’s next

    If you work on synthesis, hardware, or you’re funding research — the code is open and the board costs $150.

    Next post: cryptographic circuit generators for DES and SHA, synthesized from the DSL, deployed to the FPGA. After that: an open-source architecture for SHA-1 collision hunting that makes Bitcoin’s address space look rather less comfortable. All designs public — because if the vulnerability exists, pretending otherwise is just poor manners. Any coins found can fund something useful. Clean energy. Quantum computing. Not espresso machines with a subscription model.

    Ceterum censeo slopem esse delendam.

    (Cato the Elder ended every speech in the Roman Senate with “Carthage must be destroyed” — regardless of the topic. This is that, but for AI slop.)

    Repositories:

  • The Synthesis Problem: Why I’m Building a New Logic Toolchain

    Modern chip design leaves performance on the table. A lot of it. Meanwhile, billionaire CEOs with the technical depth of a drunk high-schooler who wants to be new age when he grows up keep calling a glorified autocomplete “AGI.” Nobody’s asking if the circuit itself is well-designed — just whether the output sounds smart.

    The tools we use to go from a logical specification to a physical circuit are decades old in their core ideas. They work — billions of transistors ship every year — but they settle for “good enough” at almost every stage of the pipeline. Synthesis heuristics that don’t explore the real optimization space. Representation formats that can’t talk to each other. A wall between the people who study formal logic and the people who tape out silicon.

    I want to build better circuits. Not a better CPU, not a better GPU — better circuits, generally. Classical, reversible, quantum. The kind of improvement that comes from rethinking the synthesis process itself, not from adding more transistors.

    That’s what this project is about.

    What I Actually Built

    Over the past several years, I’ve been assembling an open source toolchain that connects formal logic to real hardware. Each piece exists because I hit a wall with existing tools.

    llogic is the foundation — a library of logic representations. Boolean formulas, CNF, DNF, OBDDs, QBF, combinational circuits, reversible circuits, quantum circuits. They all live under one roof because they share more structure than the textbooks let on. A circuit is a formula is a constraint problem. If your tools understand that, you can move between representations and optimize across them.

    lcfgen generates circuit families — parameterized circuit structures that let you explore design spaces systematically instead of hand-wiring one instance at a time.

    llogic2verilog translates circuits from llogic’s internal representation into synthesizable Verilog. This is the bridge from formal logic to hardware toolchains.

    lverilog is a Verilog parser that produces a clean AST, because I needed one that I could actually inspect and transform programmatically without fighting a legacy codebase.

llogic_basys3 is the proof that this isn’t an academic exercise. It targets the Digilent Basys3 board — a Xilinx Artix-7 FPGA — and runs brute-force integer factorization by testing 16×16 bit multiplication at 50 MHz. A MicroBlaze soft processor drives the circuit over AXI, exposed as a UART interface. You feed it a number, it searches for factors — on a $150 hobby board, clocking through candidates at 50 million per second.

    Theory in. Hardware out. No marketing budget required, no claims of sentience.
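To make “a circuit is a formula is a constraint problem” concrete, here is the textbook move that bridges the first two: the Tseitin construction, which turns every gate into a handful of CNF clauses over a fresh variable. Plain Python for illustration; this is the general idea, not llogic’s actual API.

    # Textbook Tseitin encoding: a circuit becomes a CNF formula, gate by gate.
    # Illustrative only -- not llogic's API.
    class CnfBuilder:
        def __init__(self):
            self.n_vars = 0
            self.clauses = []

        def new_var(self):
            self.n_vars += 1
            return self.n_vars

        def and_gate(self, a, b):
            out = self.new_var()
            # out <-> (a AND b)
            self.clauses += [[-a, -b, out], [a, -out], [b, -out]]
            return out

        def xor_gate(self, a, b):
            out = self.new_var()
            # out <-> (a XOR b)
            self.clauses += [[-a, -b, -out], [a, b, -out], [-a, b, out], [a, -b, out]]
            return out

    # A half-adder as a circuit...
    cnf = CnfBuilder()
    a, b = cnf.new_var(), cnf.new_var()
    s = cnf.xor_gate(a, b)   # sum
    c = cnf.and_gate(a, b)   # carry
    # ...is now also a constraint problem, ready for any SAT solver.
    print(cnf.n_vars, "variables,", len(cnf.clauses), "clauses")   # 4 variables, 7 clauses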

    Why This Matters

    The connection between Boolean satisfiability, quantified Boolean formulas, and circuit structure is well-studied in theory. My published work on QBF-based circuit synthesis showed that you can use the structure of quantified formulas to derive circuits with provable properties. But the research community largely stops at the paper. The tooling to go from that theory to running hardware didn’t exist.

    It does now.

    And the scope is broader than classical digital logic. The same formal framework that represents a combinational circuit can represent a reversible circuit or a quantum circuit. The same optimization that simplifies a Boolean formula can simplify a quantum algorithm’s gate structure. There’s a deep connection to Bayesian inference here too — probabilistic reasoning over circuit structure — that I’ll write about separately.

    Where This Goes

    I’m not building a toolchain for the sake of building a toolchain. I care about two things: scalability and energy efficiency. Better synthesis means smaller circuits. Smaller circuits mean less power, less area, more throughput. At scale, this is the difference between a computation that’s feasible and one that isn’t.

    The implications reach beyond hardware design. Optimized circuit structures have direct applications in machine learning acceleration — which is to say, making the very large circuits that people mistake for intelligence actually run efficiently. The same goes for cryptanalysis and scientific computing — anywhere you’re bottlenecked by the gap between what you want to compute and what the hardware can deliver. I’ll write about those connections in future posts.

    The FPGA demo is the first milestone — a hobby board factoring integers to prove the pipeline works end-to-end. The next steps involve pushing the optimization boundaries, extending to quantum targets, and making the case — with working hardware — that this approach produces better circuits.

    If you’re a researcher working on synthesis, a hardware engineer frustrated with your tools, or a program manager looking for the next leap in design methodology: let’s talk.

    The code is open source. The results are reproducible. The ambition is to build circuits more powerful than anything that exists today.

    Repositories:

  • An Essay on Computation

Since ancient times, humans have been building surprisingly intelligent computers. Computation is a very broad term. It happens not only in man-made computers but also in nature. The basis of computation is deceptively simple: if there is a notion of memory, one can count. Counting makes addition possible. Addition is the basis of multiplication. With addition and multiplication we are halfway there, having almost all of mathematics. Of course, this opening argument has just promoted me to dean of the faculty of gross generalization.

One of the foundational questions, popularized by Scientific American, is whether the universe is analog or digital. While the correct answer is probably “neither and both, it is quantum, it is the solution of the Schrödinger equation for all particles in the universe, it is a complex vector that only the universe can compute”, I find that quantum computers juxtapose analog and digital concepts. A qubit is a superposition (linear combination) of zero and one, although there is nothing special about choosing zero and one as the basis. It is possible, exotic as it sounds, to build quantum computers whose basic unit is ternary rather than binary (a qutrit instead of a qubit), and I am sure such machines exist.

Let’s switch topics and talk about errors in computation. All our machines make errors and everything around us is imperfect. What is worse is that people want to scale computation, taking the output of one computational block and feeding it to another. Modern computational platforms are truly humongous. Even the original Pentium contained millions of transistors, which works out to hundreds of thousands of gate equivalents; modern processors contain billions. The NAND gate is the basic logical building block of modern von Neumann computing. As we will see in more technical detail in subsequent posts, a NAND gate is a very primitive device. Even to add two 16-bit numbers we need hundreds of them.
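To put a number on “hundreds of them”: the standard full adder built from nothing but NAND gates takes nine of them per bit, so a 16-bit ripple-carry adder is already 144 NANDs. A sketch in Python, just to show the construction; the gate count is the point.

    # The classic nine-NAND full adder. Nine gates per bit, so a 16-bit
    # ripple-carry adder needs 9 * 16 = 144 NAND gates.
    def nand(x, y):
        return 1 - (x & y)

    def full_adder(a, b, cin):
        t1 = nand(a, b)
        t2 = nand(a, t1)
        t3 = nand(b, t1)
        s1 = nand(t2, t3)        # s1   = a XOR b
        t4 = nand(s1, cin)
        t5 = nand(s1, t4)
        t6 = nand(cin, t4)
        total = nand(t5, t6)     # sum  = a XOR b XOR cin
        cout = nand(t1, t4)      # cout = (a AND b) OR ((a XOR b) AND cin)
        return total, cout

    # Exhaustive check against ordinary arithmetic, all eight input combinations.
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                total, cout = full_adder(a, b, cin)
                assert 2 * cout + total == a + b + cin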

A NAND gate in the device on which you are reading this is made out of CMOS transistors. The CMOS transistor is an analog device: even though we use only two voltage values (the exact values depend on the process being used), there is always some amount of noise, leakage, and imperfection. The ingenious thing about the NAND gate is that, once the input voltages are applied, its circuitry pulls the output to either the ground rail or the supply rail, and these two values are what we use for logical zero and one. Let’s say that zero is everything between 0 V and 1.5 V and one is everything between 3 V and 5 V. Then whether we feed it 0.1 V and 4.5 V as inputs or 1.2 V and 3.2 V, the output will always land close to 5 V. So a NAND gate is, in a way, self-error-correcting. This is what allows the scaling-up and chaining of thousands of gates while keeping the Boolean zero/one result trustworthy.

Now you have a taste of what I will be writing about. I will try to be practical, precise, sciency, comprehensible, and fun. And, of course, I will be writing about many things. As one of my favorite Dutch expressions puts it: “koetjes en kalfjes” (Google it, or rather, DuckDuckGo it).