## Third Party Bit Commitment

In this post: using a trusted third party to make bit commitment perfectly secure.

### Motivation

I wrote my master’s thesis on the subject of secret sharing, where a dealer splits some secret message into shares. Later on, players use some fraction of the shares to recover the secret. One of the tweaks to the problem that I addressed was “What if the players, who want to backstab each other, have infinite compute power? Can it still be done fairly?”.

Giving the players unbounded compute power makes things tricky because a lot of cryptographic primitives rely on computational hardness for their security. Bit commitment is an example of such a primitive, and is important in secret sharing protocols for keeping players honest.

The bad news is that every traditional bit commitment scheme, where a sender commits to a message that they will later reveal to a receiver, relies on computational hardness. Not even quantum senders and receivers escape this. The good news is that the secret sharing use case is *slightly* different, because our protocol starts with a trusted third party (the dealer) creating the shares.

Having a dealer introduces the ability to give the receiver data that the sender does not know. The dealer can add some redundancy to the messages, and tell the receiver how the redundancy should relate to the message. As long as changing the message scrambles that relation, the sender can’t change the message without risking detection. As long as the relation doesn’t reveal the message, the receiver can’t predict the message. It’s just a matter of doing it in a nice way.

### Shapes in Space

When I started on this bit-commitment-via-redundancy idea I was thinking in terms of parity bits and checksums, but soon I settled on the idea of *putting a point on a line*. I would encode the message into one of the components making up a line, and give the receiver a random point on the line.

The idea is that a single point is not enough to recover a line (many possible lines pass through any given point), and this prevents the receiver from recovering the message. Also, because lines only intersect at one point (or zero when parallel), a sender changing the message will fail to maintain all but one of the many possible verification points. If the sender guesses wrong when choosing which point to keep, they get caught.

Note that we can’t use “normal” lines (Euclidean space, real coordinates), because they would leak information to the receiver. For example, suppose there’s a maximum slope of 1000, a maximum y-intercept of 1000, and the receiver gets the point (2, 3000). There’s only one line that fits (slope 1000, y-intercept 1000), so the receiver learns the message without any help from the sender. If we don’t cap the slope or y-intercept, and instead make larger values less and less likely, then the receiver won’t *know* the message but will still get a pretty good idea: the message loses information entropy. What we want to use instead are lines over finite fields (e.g. modular arithmetic with a prime modulus).

Finite fields can take some getting used to, but they’re surprisingly similar to the field of real numbers. Non-parallel lines still intersect at exactly one point, parallel lines still intersect at zero points, and any two points on a line still uniquely determine that line. To give you an idea of how they work, I’ve made an animation showing how the line between two points changes as one of the points is moved:

Notice how, in the above animation, the line loops from top to bottom and from right to left. Lines in finite fields are loopier. Also notice how points always fall on an intersection of the grid, corresponding to a pair of integers (x, y). The space is discrete, not continuous. Although I’ve drawn the lines as continuous, that’s just to make them recognizable as lines instead of a confusing sequence of discrete points.

Getting back to bit commitment, the above diagram shows the difficulties a sender will have when changing the message. Every time the verification point (the blue one) moves, a different slope is needed to hit it.

Let’s visualize why the receiver is unable to recover the message. The following animation shows that many possible lines (one per possible slope) pass through any single verification point:

If you keep track of what y-intercepts show up as the slope is incremented in the above animation, you can see that each y-intercept is also used exactly once. Here’s the same animation, but re-ordered so the y-intercept is incrementing instead of the slope incrementing:

The verification point defines a one-to-one relation between the y-intercepts and the slopes. The trick that keeps the sender honest is that each verification point defines a *different* relation. The sender doesn’t know which one to maintain.
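This pairing is easy to check directly. Here’s a small sketch of my own (the prime field GF(11), the point values, and the name `intercept_for` are all my illustrative choices, not from any source): a fixed verification point pairs each slope with exactly one y-intercept, and different points induce different pairings.

```python
# Over GF(p) (here p = 11, a toy prime), a fixed verification point (x0, y0)
# pairs each slope with exactly one y-intercept, and different verification
# points induce different pairings.

P = 11

def intercept_for(slope, x0, y0):
    """The unique y-intercept b satisfying y0 = slope*x0 + b (mod P)."""
    return (y0 - slope * x0) % P

pairing = {m: intercept_for(m, 3, 7) for m in range(P)}

# The pairing is one-to-one: every intercept shows up for exactly one slope.
assert sorted(pairing.values()) == list(range(P))

# A different verification point defines a different pairing, which is what
# keeps the sender from knowing which relation to maintain.
assert pairing != {m: intercept_for(m, 5, 2) for m in range(P)}
```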

### Symbolic

My initial idea for how a message should be encoded into a line was to make the y-intercept equal to the message and the slope a random number. This works, but has a minor flaw: the query point can’t be placed at x = 0, because there the query answer equals the y-intercept and the verifier would know the message. This makes it impossible to work modulo 2 (excluding x = 0 leaves x = 1 as the only possible query location, and a sender who knows the query location can forge freely), which would be convenient considering that computers work in binary. Later I realized I’d just had the roles backwards. The message should determine the *slope*, and then you randomize the y-intercept.
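The difference between the two encodings shows up immediately at x = 0. Here’s a tiny sketch of my own (using an assumed toy field GF(5); the name `query_answer` is mine): with the intercept as the message, a query at x = 0 leaks everything, while with the slope as the message it leaks nothing.

```python
P = 5  # a toy prime field GF(5)

def query_answer(slope, intercept, x0):
    # The receiver's verification point is (x0, query_answer(...)).
    return (slope * x0 + intercept) % P

# Intercept-as-message: a query at x0 = 0 returns the intercept itself,
# i.e. the message leaks no matter which random slope was chosen.
message = 4
assert all(query_answer(s, message, 0) == message for s in range(P))

# Slope-as-message: the same query at x0 = 0 returns only the random
# intercept, which is independent of the message.
assert {query_answer(m, 3, 0) for m in range(P)} == {3}
```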

Here’s the process the dealer uses to commit a sender to sending a specific message to a receiver:

- We work in a finite field F large enough to encode the message.
- Let m be the message, encoded into F.
- Pick a random y-intercept b from F.
- Pick a random query location x₀ from F.
- Let the query answer be y₀ = m·x₀ + b.
- Give (m, b) to the sender.
- Give (x₀, y₀) to the receiver.
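The dealer’s procedure can be sketched in a few lines of Python. This is a minimal illustration assuming a prime field; the prime `P` and the names `deal` and `verify` are my own choices, not from any source.

```python
import secrets

# The field size bounds the forgery chance at 1/P.
P = 2**61 - 1  # a Mersenne prime, so arithmetic mod P is a finite field

def deal(message):
    """Dealer: split a commitment to `message` into sender and receiver halves."""
    assert 0 <= message < P
    b = secrets.randbelow(P)       # random y-intercept
    x0 = secrets.randbelow(P)      # random query location
    y0 = (message * x0 + b) % P    # query answer: (x0, y0) lies on the line
    return (message, b), (x0, y0)  # sender's half, receiver's half

def verify(sender_half, receiver_half):
    """Receiver: does the revealed line pass through the secret query point?"""
    m, b = sender_half
    x0, y0 = receiver_half
    return (m * x0 + b) % P == y0

sender_half, receiver_half = deal(123456789)
assert verify(sender_half, receiver_half)  # an honest reveal always passes

# A forged slope with an unadjusted intercept only passes when x0 happens
# to be 0, which the sender can't predict:
forged = ((sender_half[0] + 1) % P, sender_half[1])
assert verify(forged, receiver_half) == (receiver_half[0] == 0)
```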

When the time comes, the sender will transmit m and b to the receiver. The receiver will check that y₀ = m·x₀ + b, rejecting the message if the equality does not hold. Senders who choose to forge a message have only a 1-in-|F| chance of succeeding, where |F| is the size of the field. This chance can be made arbitrarily small by picking a large enough finite field.
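The 1-in-|F| bound can be verified exhaustively over a field small enough to enumerate. This counting check is my own illustration (the particular line and forged slope are arbitrary choices): whichever intercept a forger pairs with a changed slope, it agrees with the true line at exactly one query location.

```python
P = 5          # a toy field, small enough to check every case
m, b = 3, 1    # the committed line: y = 3x + 1 (mod 5)
m_forged = 4   # the slope the dishonest sender wants to claim instead

for b_forged in range(P):
    passing = [x0 for x0 in range(P)
               if (m_forged * x0 + b_forged) % P == (m * x0 + b) % P]
    # Whichever intercept the forger pairs with the new slope, the forgery
    # passes for exactly one of the P possible query locations.
    assert len(passing) == 1
```

Two distinct lines over a field intersect in at most one point, so each candidate intercept survives exactly one of the |F| equally likely query locations.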

Let’s run through a quick example. Suppose we choose F to be GF(4). The finite field GF(4) is like the integers modulo 2, but two bits at a time. It has four elements (00, 01, 10, and 11), each corresponding to two bits. Addition is done by bitwise xor-ing (e.g. 01 + 11 = 10), and multiplication by carry-less polynomial multiplication reduced modulo x² + x + 1 (plain bitwise and-ing wouldn’t work, because it lets two nonzero elements multiply to zero, which no field allows).

Suppose our message is m = 10. The dealer picks a random y-intercept b = 01 and a random query location x₀ = 11. The dealer computes y₀ = m·x₀ + b = 10·11 + 01 = 01 + 01 = 00. The sender gets (m, b) = (10, 01) and the receiver gets (x₀, y₀) = (11, 00).

If the sender tries to change the message to m′ = 01 and sticks with the same y-intercept, the receiver will check whether y₀ = m′·x₀ + b and find that 01·11 + 01 = 11 + 01 = 10, not 00, detecting the forgery. The “correct” y-intercept to fool the receiver would have been b′ = 11, but if the verification location had been x₀ = 00 then keeping the y-intercept the same would have been the right move.
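GF(4) arithmetic is small enough to implement in a few lines. This is my own minimal sketch (the names `gf4_add` and `gf4_mul` are mine), assuming the standard representation of elements as 2-bit polynomials reduced modulo x² + x + 1.

```python
# Elements of GF(4) are 2-bit values 0b00..0b11, i.e. polynomials over GF(2)
# of degree < 2. Addition is XOR; multiplication is carry-less polynomial
# multiplication reduced modulo the irreducible polynomial x^2 + x + 1.

def gf4_add(a, b):
    return a ^ b

def gf4_mul(a, b):
    # Carry-less multiply: shift-and-xor instead of shift-and-add.
    product = 0
    for i in range(2):
        if (b >> i) & 1:
            product ^= a << i
    # Reduce: replace x^2 (bit 2) with x + 1, since x^2 + x + 1 = 0.
    if product & 0b100:
        product ^= 0b111
    return product

# Sanity check that this really is a field: every nonzero element has a
# multiplicative inverse. (Bitwise AND fails this test: 0b10 & 0b01 == 0b00,
# a zero product of nonzero elements.)
for a in range(1, 4):
    assert any(gf4_mul(a, b) == 1 for b in range(1, 4))
```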

### Rediscovery

I was really happy to have stumbled onto this third party bit commitment idea. Crypto is usually terribly complicated, yet we ended up with something so wonderfully *simple*. It could reasonably have been a tiny paper of its own, separate from my thesis.

Only… there’s this thing that always happens when I come up with ideas. A little searching confirmed it: already discovered.

In 1999, Ron Rivest of RSA fame published “Unconditionally secure commitment and oblivious transfer schemes using private channels and a trusted initializer”. It describes exactly the system I had found.

At least this time it wasn’t known before I was even born.

### Summary

Adding a trusted dealer makes bit commitment simple.

Normally bit commitment’s security requires computational hardness assumptions, but the third party allows information-theoretic security.

Ron Rivest is smarter than me.


### My Twitter: @CraigGidney

—

Twisted Oak Studios offers consulting and development on high-tech interactive projects. Check out our portfolio, or Give us a shout if you have anything you think some really rad engineers should help you with.
