Tuesday, November 12, 2019

Bitcoin’s maximum supply is now well below 21 million


In October 2019, the Bitcoin blockchain passed the milestone of 18 million BTC mined, leaving only 3 million Bitcoins still to be mined in the future. However, the maximum number of Bitcoins that will ever be usable is now closer to 17 million than the 21 million cap originally fixed by Satoshi Nakamoto. According to a Chainalysis study published at the end of 2017 and relayed at the time by Fortune.com, nearly 3.79 million Bitcoins had already been lost. Mining is far from over, though: it should take about 120 more years, until around 2140, for the last Bitcoin to be mined.
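The 21 million cap itself is just arithmetic: the block subsidy started at 50 BTC and halves every 210,000 blocks until it rounds down to zero satoshis. A minimal sketch of that schedule (the constants are Bitcoin's actual consensus parameters):

```python
# Back-of-the-envelope check of Bitcoin's issuance schedule: 50 BTC per
# block, halving every 210,000 blocks, summed until the subsidy rounds to
# zero. Subsidies are integers in satoshis (1 BTC = 100,000,000 satoshis),
# which is why the total lands just under 21 million.
HALVING_INTERVAL = 210_000
subsidy_sats = 50 * 100_000_000
total_sats = 0
while subsidy_sats > 0:
    total_sats += HALVING_INTERVAL * subsidy_sats
    subsidy_sats //= 2  # integer halving, as in Bitcoin's consensus code

print(total_sats / 100_000_000)  # 20999999.9769 BTC -- just under 21 million
```

The integer division is what makes the sum finite: after 33 halvings the subsidy truncates to zero, so "21 million" is really 20,999,999.9769 BTC even before any coins are lost.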

But if coins keep getting lost at this rate, will Bitcoin end up a collector's item? Or will the growing rarity of the coins boost their price while leaving them of practical use?

https://medium.com/altcoin-magazine/bitcoins-maximum-supply-is-now-well-below-21-million-33023a576bf1
https://www.unchained-capital.com/blog/geology-of-lost-coins/

Using your ECG test results, AI can predict if you will die within a year


Researchers from Geisinger Health System in Pennsylvania analyzed the results of 1.77 million ECGs and other records from almost 400,000 patients. The team used this data to compare machine learning-based models that either directly analyzed the raw ECG signals or relied on aggregated human-derived measures (standard ECG features typically recorded by a cardiologist) and commonly diagnosed disease patterns. The neural network model that directly analyzed the ECG signals was found to be superior for predicting one-year risk of death. The neural network was able to accurately predict risk of death even in patients deemed by a physician to have a normal ECG.

Just a fun thought: while machines and AI algorithms are still benign, what will happen when they become autonomous and superior to humans? Will they "take action" to prove their superiority? It is a scary prospect when the prediction at stake is something like the one above.

https://www.newscientist.com/article/2222907-ai-can-predict-if-youll-die-soon-but-weve-no-idea-how-it-works/
https://arxiv.org/ftp/arxiv/papers/1904/1904.07032.pdf

Human-Machine Singularity Series | Evolution of AI and its takeover of our jobs!


There are four levels of complexity of Artificial Intelligence:

Instructive - Algorithms where rules are codified manually
This level has existed since the advent of computers, with the machine (i.e., the computer) acting as a dumb interpreter of human intelligence. This stage saw siloed digitization of data and processes with the aim of increasing speed, especially for repeatable tasks, and reducing errors. It quickly evolved into also giving structure to the data and enabling interoperability between organizations. Machines simply did whatever humans did, much faster, without fatigue and with minimal errors: what took thousands of people could be done by a few machines in a fraction of the time. Indeed, this sparked an exponential decline in the human effort required to generate the same degree of value. This stage is level 0 of AI, as the machines are not thinking; they do whatever you code.
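A toy illustration of level 0, where every rule is codified by hand. The leave-approval routine and its rules are made up for this sketch; the point is that the machine only executes the human's rules, faster and without fatigue:

```python
# Level-0 "instructive" AI: the rules live entirely in the code, written
# by a human. This hypothetical leave-approval check is the kind of
# repeatable clerical decision that early digitization automated.
def approve_leave(days_requested, balance, in_blackout_period):
    if in_blackout_period:
        return "rejected: blackout period"
    if days_requested > balance:
        return "rejected: insufficient balance"
    return "approved"

print(approve_leave(3, 10, False))   # approved
print(approve_leave(12, 10, False))  # rejected: insufficient balance
```

Change the policy and a human must change the code; nothing is learnt from data.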

Supervised - Algorithms that codify rules through guided learning
This is where AI has become mainstream today. The outcome depends on past input-output pairs (training data) that are not explicitly coded into the solution. The algorithms fine-tune their input-output logic using the past data and are then able to predict future outcomes using this "learnt" logic.
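A minimal sketch of the supervised idea, in plain Python with made-up data: the slope and intercept below are never coded by hand; they are fine-tuned from past (x, y) pairs by gradient descent and can then be reused on unseen inputs.

```python
# Supervised learning in miniature: fit y ~ w*x + b from labelled
# examples by gradient descent on squared error. The data are invented
# and roughly follow y = 2x; the least-squares fit is w ~ 1.94, b ~ 0.15.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # past input-output pairs (training data)

w, b = 0.0, 0.0
lr = 0.02
for _ in range(5000):
    # gradients of mean squared error with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))   # 1.94 0.15
print(round(w * 5.0 + b, 1))      # prediction for an unseen input x = 5
```

The "rule" (w, b) was discovered from examples, which is exactly what distinguishes this level from the instructive one.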

Unsupervised - Algorithms that codify rules through exploration
Today, this is beginning to enter the mainstream. The algorithm learns from past input data alone and figures out the inferences or outputs from that data, thus creating the input-output logic itself.
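A minimal unsupervised sketch (1-D k-means with k=2 on toy data): no labels or outputs are supplied; the algorithm discovers the two groups from the structure of the inputs alone. The data and choice of k are assumptions for illustration.

```python
# Unsupervised learning in miniature: 1-D k-means clustering. The points
# below form an obvious "low" group and "high" group, but no one tells
# the algorithm that -- it infers the grouping by iterating assignment
# and re-averaging of cluster centers.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = [points[0], points[3]]  # naive initialisation from the data

for _ in range(10):
    clusters = [[], []]
    for p in points:
        i = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        clusters[i].append(p)
    centers = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 1) for c in centers))  # [1.0, 9.1]
```

The learnt centers are themselves the "output logic": a new point is labelled by whichever center it falls nearest.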

Generalized - Algorithms that codify rules which adapt to changes in environment
This is the cutting edge of artificial intelligence research and includes artificial-general-intelligence algorithms. The algorithm learns on the way, just as a child does, and figures out the best input-output logic. It is just a few steps away from enabling Human-Machine Singularity, where algorithms will continue to refine themselves to become the better of the two races. Who would then need humans! This will have an exponential impact on human employability.


Saturday, November 2, 2019

Rise of AI and impact on architects


90% of architects will lose their jobs as artificial intelligence takes over the design process, according to designer Sebastian Errazuriz (link below). Wallgren Arkitekter and BOX Bygg have created a tool that generates adaptive plans.



I do not disagree with the assessment. The profession combines art with optimization problems. Consider the task of designing a house. Give it to AI: it can search the internet for retro or modern ideas, analyze the common design patterns in vogue, and implement design features that complement the climatic conditions and the surrounding landscape, while of course maximizing the use of limited land, say between greens and concrete. While mathematics solves the optimization, AI plus the internet adds the "art" to the whole process.
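The "greens versus concrete" trade-off above is, at its core, a one-variable constrained optimization. A toy sketch, where the plot size, the 40% minimum-garden rule, and the utility function are all made-up assumptions (and brute force stands in for a real solver):

```python
# Toy land-split optimization: divide a 1000 m^2 plot between garden
# ("greens") and built area ("concrete"). Utility rewards floor area with
# diminishing returns and garden linearly; an assumed zoning rule demands
# at least 40% green. Brute force over whole square metres.
import math

PLOT = 1000  # m^2, assumed plot size

def utility(garden):
    built = PLOT - garden
    if garden < 0.4 * PLOT:          # assumed zoning constraint
        return float("-inf")
    return 3 * math.sqrt(built) + 0.05 * garden

best_garden = max(range(PLOT + 1), key=utility)
print(best_garden)  # 400 -- the zoning constraint binds
```

With these made-up weights the unconstrained optimum would use even less garden, so the solver lands exactly on the 40% floor; the design "decision" falls out of the constraint, which is the kind of reasoning an AI design tool automates at scale.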

Reference:
Rise of artificial intelligence means architects are "doomed" says Sebastian Errazuriz
https://www.dezeen.com/2019/10/22/artificial-intelligence-ai-architects-jobs-sebastian-errazuriz/

Gartner's AI Hype Cycle update for 2019 - significant step towards becoming mainstream


In Sep 2019, Gartner released an update to its AI hype cycle. Between 2018 and 2019, organizations that have deployed artificial intelligence (AI) grew from 4% to 14%, according to Gartner’s 2019 CIO Agenda survey.

AI is reaching organizations in many different ways compared with a few years ago, when there was no alternative to building your own solutions with machine learning (ML). AutoML and intelligent applications have the greatest momentum, while other approaches are also popular — namely, AI platform as a service or AI cloud services.

Conversational AI remains at the top of corporate agendas spurred by the worldwide success of Amazon Alexa, Google Assistant and others.


DeepMind's AI achieves grandmaster level in StarCraft II game


DeepMind (a Google subsidiary) has designed an AI system called AlphaStar that now outranks the vast majority of active StarCraft II players, demonstrating a much more robust and repeatable ability to strategize on the fly than before. This was quite a feat: StarCraft II is highly complex, with 10 to the power of 26 choices for every move. It is also a game of imperfect information, and there are no definitive strategies for winning.

The achievement marked a new level of machine intelligence. AlphaStar used reinforcement learning, in which an algorithm learns through trial and error, to master the game. The AI reached a rank above 99.8% of the active players in the official online league. The DeepMind team modified a commonly used technique known as self-play, in which a reinforcement-learning algorithm plays against itself to learn faster. DeepMind famously used this technique to train AlphaGo Zero, the program that taught itself, without any human input, to beat the best players at the ancient game of Go.
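As a toy illustration of self-play (nowhere near AlphaStar's scale or methods), the sketch below has a tabular Q-learner play the tiny game of Nim against itself: five stones, each turn take one or two, and whoever takes the last stone wins. The game, episode count, and learning rate are all assumptions made for this sketch.

```python
# Self-play in miniature: one Q-table plays both sides of 5-stone Nim.
# Q[(stones, action)] estimates the value, for the player to move, of
# taking `action` stones. Since the opponent is the same learner, the
# value after a non-terminal move is the negation of the opponent's best
# reply -- improving the table improves both "players" at once.
import random

random.seed(0)
Q = {}
alpha = 0.5  # learning rate (assumed)

def best(stones):
    acts = [a for a in (1, 2) if a <= stones]
    return max(acts, key=lambda a: Q.get((stones, a), 0.0))

for episode in range(5000):
    stones = 5
    while stones > 0:
        acts = [a for a in (1, 2) if a <= stones]
        # epsilon-greedy: mostly play the current best move, sometimes explore
        a = random.choice(acts) if random.random() < 0.3 else best(stones)
        nxt = stones - a
        if nxt == 0:
            target = 1.0  # mover took the last stone and wins
        else:
            # opponent (same Q-table) moves next; their gain is our loss
            target = -max(Q.get((nxt, b), 0.0) for b in (1, 2) if b <= nxt)
        q = Q.get((stones, a), 0.0)
        Q[(stones, a)] = q + alpha * (target - q)
        stones = nxt

print(best(5))  # 2 -- leave a multiple of 3, the known winning strategy
```

Through trial and error against itself, and with no strategy coded in, the learner rediscovers Nim's known optimal play: always leave the opponent a multiple of three stones.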

The results were published in Nature on Oct 30, 2019 at https://www.nature.com/articles/s41586-019-1724-z