“Reviewing the material about mining on
slides 20 and 21, the last step refers to…
the hash being reviewed against the desired pattern
in order to arrive at the prize, the block reward.”
“How is this ‘desired pattern’ defined and
generated in the decentralized platform?”
“Is the desired pattern somehow
centralized and broadcast to each node?”
“What are the inputs to build this desired pattern?
[Does the pattern] change with each new block?”
Great questions. This is an area many of
our students find confusing, to say the least.
Mining is not an easy topic to understand.
Let’s get our terms correct first.
The desired pattern is called the ‘target.’
The target defines the difficulty [for the network].
Through the proof-of-work algorithm, if a miner
achieves a result that is less than the target,
they are eligible to receive [the block reward], if the
information in the block and transactions are valid.
[They are] validated through the
consensus rules by [every other node].
How is the target defined and how does it change?
This is a great question and an
area of confusion [to students].
The target is a number [that must]
be greater than the hash of the block.
It is simply a ‘greater than, less than’ operator being
used to compare [hashes] against the desired pattern.
The miners are mining by hashing the header of each
block. The hash they are producing, which looks like…
a long string of hexadecimal digits,
is essentially just a number.
If you think of the hash as a number,
then the target is another number.
The hash of a block [must] be less than the target.
One way I like to [illustrate] this: the target is like limbo,
where you have to dance and pass underneath this bar.
The lower the bar gets, the harder it is to
pass underneath it for each limbo dancer.
If the target is lowered, it is actually harder to
find a number that is smaller than that target.
Every time the target gets lower, the difficulty becomes
greater, because it is harder to find a number that fits.
That is the process by which the [difficulty]
target is compared to the block hash.
The target is a number that defines the difficulty
of the proof-of-work mining algorithm.
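To make the “hash as a number” idea concrete, here is a minimal Python sketch. This is not consensus code: the header bytes and the target value are made up for illustration, and byte-order details are simplified.

```python
import hashlib

# Hypothetical header bytes; a real Bitcoin header is an 80-byte
# structure (version, previous hash, merkle root, timestamp, bits, nonce).
header = b"example block header bytes"

# Bitcoin hashes the header twice with SHA-256.
block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()

# The 32-byte hash is just a 256-bit number.
hash_as_number = int.from_bytes(block_hash, "big")

# A made-up, very easy target for illustration.
target = 2 ** 248

# The whole comparison is a single "less than" check.
print(hash_as_number < target)
```

The entire proof-of-work check reduces to that one comparison between two large integers.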
If you [look at] the target, what you notice immediately
is that the first few digits are zeroes.
While the number started very high back in 2009,
when Satoshi Nakamoto mined the first block,
that number has now become billions of times smaller,
making the calculation billions of times more difficult.
As it becomes a smaller number, that means
[more] leading digits of that number are zeroes.
For example, let’s [think of] a big number.
What is smaller than one million?
Nine hundred and ninety-nine thousand,
nine hundred and ninety-nine is smaller.
That can be written as ‘0999999.’
What is smaller than ‘0999999’?
One thousand is smaller, written as ‘0001000.’ As we
go down, there are [more] zeroes at the beginning.
Finding a number even smaller than that target
[becomes] more difficult, the smaller the target.
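The fixed-width notation in that example can be reproduced in a couple of lines of Python:

```python
# Written at a fixed width of seven digits, smaller numbers
# acquire more leading zeroes, just like a lower target does.
for n in (1_000_000, 999_999, 1_000):
    print(f"{n:07d}")
# prints 1000000, 0999999, 0001000
```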
The hashing process that miners conduct is effectively random;
the cryptographic hash function behaves like a random number
generator, producing a hash you can’t predict.
How do you know if it is smaller? You can’t predict
whether it [will be] smaller than the target.
In order to find a number that is smaller than the target,
you [must] just keep trying again and again,
pulling out random numbers from the cryptographic
hash function, until one of them — by sheer chance —
is smaller than the [difficulty] target.
The lower the target, the more hashing you [must]
do before you can find one smaller than the target.
That is the process by which the [block] reward
is allocated, through the proof-of-work algorithm.
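That trial-and-error loop can be sketched as a toy miner. This assumes a made-up header and a deliberately easy target; it is an illustration of the idea, not a real Bitcoin miner.

```python
import hashlib

def mine(header_base: bytes, target: int, max_tries: int = 1_000_000):
    """Try nonce after nonce until the double-SHA-256 hash,
    read as a number, happens to fall below the target."""
    for nonce in range(max_tries):
        candidate = header_base + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(candidate).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None  # gave up; real miners change other header fields and continue

# A very easy target: roughly one hash in 256 succeeds,
# so a winning nonce is found almost immediately.
easy_target = 2 ** 248
result = mine(b"toy header", easy_target)
```

Lowering `easy_target` (say, to `2 ** 232`) makes a success about 65,000 times rarer, which is exactly the limbo-bar effect described above.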
Going back to Roberta’s question, “Is the desired pattern
somehow centralized and broadcast to each node?”
No. Each node independently calculates
what the target should be and adjusts it.
It started with a specific number that was
hard-coded [with the genesis block] in January 2009.
Since then, every 2016 blocks, or approximately every
two weeks, we have a “re-targeting,” as it is called.
Every 2016th block exactly, every node in the network
calculates a new target for the [next 2016 blocks].
They [look at the latest] 2015 blocks,
[see] that the next block will be the 2016th, and [know
they] have completed [another] re-targeting period.
[They] independently re-calculate what the target should
be for [the next 2016 blocks], before block 2016 is mined.
What should that be? Let’s look at the previous 2016
blocks and see how long those [took] to be mined.
It should take 20,160 minutes [total],
because blocks are targeted at ten minutes each.
If we count how long it actually took
to mine the previous 2016 blocks,
and we find that it [took] less than 20,160 minutes,
we were [mining] blocks faster than we should [have].
The difficulty was not [great enough]. It was too easy.
The target [should] be lowered proportionately,
in order to make [the difficulty greater].
If it [took] longer than 20,160 minutes for [mining]
2016 blocks, that means it was too difficult.
We [were] finding blocks too slowly, and so
the target is [increased] to make it easier.
Again, that is done proportionately.
The formula is to proportionately adjust the target up or
down by the ratio of how long it actually took to mine 2016
blocks, to how long it should take to find 2016 blocks,
which is 20,160 minutes [in about a two-week period].
That proportionate adjustment is the same for every
node, even though they are not coordinating.
They can all count how long it took to [mine] the
previous 2016 blocks, [which will be] the same number…
across all of the nodes, because they count
by looking at the timestamps in the block headers.
They can also divide that number by 20,160 minutes,
and they will arrive at the same exact result.
If they multiply the target by that proportion,
they will have calculated a new target.
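The retargeting calculation described above can be sketched in a few lines. This is illustrative Python, not the actual Bitcoin Core code: consensus code works in integer arithmetic, and it also clamps any single adjustment to a factor of four in either direction, which is reflected below.

```python
EXPECTED_MINUTES = 2016 * 10  # 20,160 minutes for 2016 blocks

def retarget(old_target: int, actual_minutes: float) -> int:
    """Scale the target by the ratio of actual to expected mining time."""
    ratio = actual_minutes / EXPECTED_MINUTES
    # Bitcoin limits any single adjustment to 4x up or down.
    ratio = max(0.25, min(ratio, 4.0))
    # Floating point is fine for a sketch; consensus code uses integers.
    return int(old_target * ratio)

# Blocks came in twice as fast as intended: the target halves,
# which doubles the difficulty.
print(retarget(1000, 10_080))  # -> 500
```

Because every node feeds the same timestamps into this same formula, every node arrives at the same new target without any coordination.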
All of the nodes in the network, having calculated
the same inputs with the same equation,
will arrive at the same conclusion.
They will independently figure out what the target
should be for the next block in the series;
then 2016 blocks later, they will do it again, with the
same inputs in the same equation [for re-targeting].
Even though there is no synchronisation, they are using
the same inputs and all arrive at the same conclusion.
That becomes the consensus [difficulty] target.
Even if a node is lying and says they
found a block [with a different target],
since all nodes know what the target should be for this
[block] period, they will all check [blocks] against [it].
They will only accept a block if it has been mined to that
specification, with a block hash less than the target.
To answer the second question, “What are
the inputs to build this desired pattern?”
The number of minutes to mine the previous 2016
blocks, divided by the expected number of minutes.
The next question was, “Is there a new pattern
created whenever a new block is [mined]?”
“Does it change every block?” No, it changes every
re-targeting period: 2016 blocks, [or about] two weeks.
“Is 2016 blocks still an optimal adjustment [period]
considering volatility, or should it be more frequent?”
This is an ongoing debate which a lot of developers
in the Bitcoin community have from time to time.
There have been many suggestions for changing the
difficulty re-targeting algorithm, to make it more nimble.
There are disadvantages to making it more frequent;
there can be a sort of whiplash effect…
where short-term fluctuations affect the difficulty,
causing more short-term fluctuations…
which can actually increase volatility.
Re-targeting every two weeks reduces
volatility, by acting as a damper.
Some developers have suggested more sophisticated
algorithms than simply a moving average.
For example, using a proportional-[integral]-derivative,
or PID controller, a feedback mechanism…
with a different window for a moving average;
a bit like how cruise control works in your car.
There are advantages and disadvantages to every
proposal. None of them have progressed [so far].
Keep in mind, that would require a hard fork and
changing [a lot] of software in the ecosystem,
and massive coordination to remain in consensus.
It might be considered if, together with other changes,
there was a change in the format of the block header.
[People would want] to do a big upgrade for a hard fork.
Some of the recommendations for hard fork planning
include changes [to the] difficulty adjustment algorithm.
“What happens when the [hash rate] drops lower,
it makes no financial sense, and [miners] drop out?”
When the difficulty changes or profitability changes,
it doesn’t affect all miners to the same [degree].
There are thousands of miners out there, operating
with a fairly broad variety of hashing equipment…
electricity prices, labor costs, utility costs, real-estate
costs, etc., all of which determine their profitability.
[It is] a range. [Some] miners operate on the very latest
ASICs, installed and managed in the most efficient way,
where real-estate is dirt cheap, electricity flows
almost freely, and labor costs are minimal.
Those miners will be wildly profitable at the
current difficulty, because they are not the average.
Meanwhile, on the other end of the scale, [some miners]
are operating with previous generation chips,
where real-estate, electricity and
labor costs are expensive, etc.
They will not be profitable. Average profitability
[or net zero] is obviously between those two.
If average profitability changes, that is a moving bar;
more miners will fall below the break-even threshold.
The least profitable [among] the miners will
abandon the field, and be replaced by miners with…
more efficient equipment and [better] locations.
“Eventually, everyone drops out” is not [what] happens.
If more [miners] drop out, difficulty goes down.
When difficulty goes down, it becomes more profitable
for people who [stay], so they don’t drop out [for long].
It is a self-adjusting process. The fewer
[miners], the easier [the difficulty] gets.
The more [miners with a lot of
hash power], the harder it gets.
There is always someone making a profit in this
environment, but not everyone makes a profit.