Bargue - 12th hour hand study

Submitted by Sam on 4 January, 2012 - 00:23

Bargue - Voltaire's Hand - pencil drawing

I completed this drawing in the twelfth hour of my thousand-hour 2012 art challenge. It is taken from Plate 1.13 of Bargue's drawing course, and it is the most intricate (and difficult) study I have done for a long time.

I have accidentally stretched the whole hand vertically, losing the delicate weight and grace of the original. Once again I have rushed into rendering before checking all the proportions match the guidance diagrams. I will probably have another attempt later to try to capture the proportions properly.

I used 2H, HB, 2B and 4B pencils, and spent around 4 hours on this drawing, which is roughly A5.

Why kitchen gardening will not change the world

Submitted by Sam on 18 December, 2011 - 21:34

This is a TED talk by Roger Doiron, founder of Kitchen Gardeners International. He opens the talk by saying:

"I have a subversive plot. It is so subversive in fact, that it has the potential to radically alter the balance of power, not only in our country, but in the entire world."

The talk periodically picks up on the globally transformative power of the kitchen garden, with a cheery rhetoric that is deeply misleading. Urban and suburban small-scale gardening will not radically alter the balance of power, and it absolutely does not carry the capacity to change the entire world, as the speaker claims.

This project has relevance and utility to those privileged to live in a temperate climate, with enough time and land to pursue kitchen gardening. For these people, typically living in the "global north", personal food security is not a concern, and growing fruit and vegetables instead confers secondary social, aesthetic and localized environmental benefits.

“Subversively” growing fruit and vegetables valued at $2,000 a year does not suddenly liberate $2,000 worth of food and distribute it to those who need it most. The food that you would have bought from the store is not redistributed to the local inner-city homeless nor to the malnourished of the global south. The global problems of food shortage, food waste and food distribution remain.

Whilst the project has intrinsic value to the communities that can pursue it, it should not be presented as though it might impact the areas of the world where food security is a primary concern. This is an unavoidably America-centric talk, and it offers a model applicable to only a small proportion of the world's population, not least because the most acute food-security problems are often in rural areas, where water scarcity, adverse climate and political instability drive food insecurity. The poor and the rural in countries from Guatemala to Tanzania cannot hope to grow their own kitchen gardens, and kitchen gardens in the global north will not feed them, nor will they help stabilize the volatile international commodity prices to which the poor are so vulnerable.

A stark illustration of the irrelevance of this project to the areas affected by food shortages can be seen by comparing the Oxfam global food crisis map, below, with the map of members of Kitchen Gardeners International.

[Embedded maps: Oxfam's global food crisis map and the Kitchen Gardeners International membership map]

3D scanning and heritage preservation for the future

Submitted by Sam on 9 November, 2011 - 20:02

The most recent TED talk, Ben Kacyra's Ancient wonders captured in 3D, demonstrates a 3D laser scanning system that rapidly generates 3D visualizations of real-world objects. It works by bombarding them with beams of laser light, recording the angle of each beam and measuring the time the light takes to reflect back to the scanner, then compiling this data into a 'point cloud' that maps a virtual 3D scene.
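
As a rough illustration of how this kind of time-of-flight scanning works (a simplified sketch of the general principle, not Kacyra's actual system), each laser return can be converted into a single 3D point from the beam's two pointing angles and the measured round-trip time:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def pulse_to_point(azimuth_deg, elevation_deg, round_trip_s):
        """Convert one laser return into an (x, y, z) point in the scanner's frame.

        The range is half the round-trip distance; the two beam angles then
        place the point in space (standard spherical-to-Cartesian conversion).
        """
        r = C * round_trip_s / 2.0                # one-way distance to the surface
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        x = r * math.cos(el) * math.cos(az)
        y = r * math.cos(el) * math.sin(az)
        z = r * math.sin(el)
        return (x, y, z)

    # A surface 30 m straight ahead returns the pulse after about 200 nanoseconds:
    print(pulse_to_point(0.0, 0.0, 2 * 30.0 / C))   # approximately (30.0, 0.0, 0.0)

Repeating this conversion millions of times per second, once per pulse, is what builds up the point cloud.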

At the end of the talk, Kacyra reveals to the audience that they have been scanned whilst he has been speaking, and he shows them a 3D fly-through of the auditorium they're in. They have been “digitally preserved in about four minutes”.

This kind of digital preservation of the real world can combine with video and still imagery to create highly realistic models of perishable structures that can be used, if necessary, as the basis of unparalleled restoration and rebuilding projects. The 3D models can have huge educational value, and if the plans are shared freely, they can allow heritage sites, for instance, to be virtually explored and enjoyed from anywhere in the world, at any time in the future.

But today's laser scanning is “only” accurate to a few millimetres, and is only skin-deep. Whilst it is very well suited for large preservation projects like digitizing Mount Rushmore for a smartphone app, it is useless for penetrating within an object and recording its contents with microscopic resolution. If we could rapidly record and reconstruct with atomic-level detail, would the replica be any different from the original? Would we be able to scan and then reconstruct living organisms, bit by bit?

The ultimate trajectory of this kind of technology could very well lead to this staggering level of control. Eric Drexler sketched out one endpoint for 3D scanning technology in Engines of Creation in 1986, where he outlined how molecular disassemblers comprising a system of nanomachines could deconstruct an object atom by atom and simultaneously record each atom's position for later reconstruction.

Our crude four-minute 3D models are a long way from this ultimate precision, but there are no physical laws that need to be broken to get there. It may just be a matter of time before any object, living or non-living, can be scanned, preserved and then replicated with perfect fidelity.

Day 100 - The Singularity

Submitted by Sam on 28 August, 2011 - 23:47

The trend towards increasingly rapid technological advancement is set to profoundly change what it means to be human, and to do so in a very short amount of time. Some futurologists see the exponential pace of advancement in information-based technologies as inexorably leading to a technological singularity, a period of technological growth so fast and so far-reaching that human life will be transformed forever. Following co-dependent, mutually reinforcing advancements in fields such as artificial intelligence, molecular nanotechnology and genetic engineering, the singularity would represent the intersection of our natural biological intelligence with our designed technologies, a merger that will dissolve the distinctions between the biological and the machine, the virtual and the real. After a singularity of this kind, perhaps the only identifiable trait shared between biological humanity and its non-biologically enhanced descendants will be the inherent drive towards self-improvement, the irresistible desire to extend and expand physical and mental reach, a quality unique to us amongst all living species.

The singularity may begin with the emergence of non-biological intelligence based on the architecture of the human brain. By default, such a being will be at least three million times faster at processing and transmitting signals than its biological model, simply because electronic signals travel at close to the speed of light whilst electrochemical signals in mammalian brains are limited to a communicative speed of just a hundred metres per second (roughly 3 × 10⁸ m/s against 10² m/s, a factor of about three million). A network of these machines could share knowledge and processing resources at extremely high speed, vastly faster than human-to-human knowledge transmission through language. Linked to the internet, non-biological intelligence could digest and master all of the knowledge of the human-machine world to create entities with the knowledge, intelligence and raw processing power to iteratively self-re-engineer, improving their own abilities to think (and to self-improve again) at an exponentially faster rate.

If nanotechnology brings non-biological neural prostheses into the biological human brain, the exponential doubling of processing power could rapidly lead to non-biological human intelligence dominating the fixed capacity of the biological portion, until only non-biological intelligence remains. Through a continuum of upgrades and replacements, the personality, memories, talents, hopes and dreams of the biological human would be preserved, but as just a tiny fraction of an intelligence trillions upon trillions of times more powerful than its non-enhanced counterpart. Compared to biological humans, post-human subjective time would expand indefinitely, as thoughts travel millions of times faster, intelligence capacity is unbounded, and every hour results in many years of human technological progress when measured in today's terms, enhancing the entity's ability to compute and to think still faster.

Eventually, the singularity could spread throughout the observable universe, reclaiming dumb matter and energy for use in an ever-expanding optimally-engineered mind, growing at the fastest speed at which information can travel. A post-singularity race could saturate the entire universe with intelligence, and would seem indistinguishable from magic – or God – to another non-transcendent intelligence.

Day 099 - Staying around to see it

Submitted by Sam on 28 August, 2011 - 02:57

The future is a very exciting place. The exponential progression of technology points to a revolution that will not only improve living standards around the world, but reduce and then cure global poverty. Nanotechnology will provide clean drinking water where before there was none, enable targeted drug delivery and early-warning detection and defence against disease, and allow for the cheap and efficient manufacture of any product or material that can possibly be built, using only the most basic of raw materials. Life expectancy will grow as regenerative medicine matures, and human intelligence, creativity and memory will be enhanced far beyond their natural biological limits. These tantalizing glimpses into human (and post-human) potential are logical extensions of the trends we see today, but we cannot accurately predict when they will become manifest. We can, however, confidently predict that many millions of people will die of age-related illness before the longevity breakthrough extends human life indefinitely, that many millions will die of hunger and through inadequate sanitation before nanotechnology decontaminates water and brings agricultural crops to the desert, and that many, many millions will die of disease and accident before minds and memories can be uploaded from neural architecture to non-biological substrates.

But perhaps death today need not be as tragic and irreversibly final as it is. We have technology today that can preserve a body for the technologies of the future to revive. With the expectation that a future medical technology based on advanced nanotechnology will be able to restore health in all but the most extreme of cases, we can use the premise of cryonics to store bodies in liquid nitrogen, where they will be preserved without deterioration for centuries, awaiting revival from an as-yet unrealized medical technology. Cryopreservation is based on the idea that historically the medical definition of death has changed as our medicine has advanced, and that what might constitute death today may be within the capabilities of medical technology to revive tomorrow.

At present, only tissues and small organs have been reversibly cryopreserved, and the whole premise of human cryopreservation relies on the theoretical possibility that personality and memories encoded in brain structures will persist long enough after legal death to be preserved by the process. As molecular nanotechnology researcher Ralph Merkle has pointed out, however, we can evaluate cryonics in only one of two ways: we can either sign up, and die if it doesn't work and live if it does, or we can ignore it entirely and die regardless of whether the process works or not. Given that the cryopreservation procedure can only legally begin after the pronouncement of death, there seems to be nothing but money to lose in taking the cryopreservation gamble, and the chance to cheat death and experience an indescribably advanced future to be gained if it pays off.

Day 098 - Harnessing the sun

Submitted by Sam on 27 August, 2011 - 01:42

The laws of physics impose very definite constraints on the information processing capabilities of intelligent civilizations, placing limitations on how much heat can be dissipated away from information processing systems, how fast computers can run, how much information can be stored, and what size and density storage and computational devices can be. Barring technological regression caused by anti-technological movements, laws or accident, the information processing needs of a civilization will grow exponentially until limits set by physical law are met. In order to service this expansion to the very limits of possibility, a technologically advancing civilization will need to harvest more and more energy with ever greater efficiency, in a logical escalation which will eventually lead to a Type II Kardashev civilization, where most or all of the energy output of its local star is captured.
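
To give a sense of the scale of the Type II target, here is a back-of-the-envelope comparison (my own figures, not from the original post) between the Sun's total output and the tiny sliver of it that the Earth intercepts today:

    import math

    L_SUN = 3.8e26       # total solar luminosity, watts
    AU = 1.496e11        # Earth-Sun distance, metres
    R_EARTH = 6.371e6    # Earth's radius, metres

    # Fraction of the sphere at 1 AU blocked by Earth's cross-section
    fraction = (math.pi * R_EARTH**2) / (4 * math.pi * AU**2)

    print(f"Earth intercepts {fraction:.1e} of the Sun's output")    # ~4.5e-10
    print(f"...which amounts to about {fraction * L_SUN:.1e} W")     # ~1.7e17 W
    print(f"A Type II civilization would command ~{L_SUN:.1e} W")

Capturing the star's full output therefore means commanding roughly a billion times more energy than falls on the entire planet.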

If a civilization survives long enough to demand the total energy output of its star, it will have to embark on a megastructural engineering project to create a network of orbiting solar collectors around the star to maximally intercept its energy output. These hypothetical structures are referred to as Dyson spheres, after the physicist Freeman Dyson who first formalized the concept of a shell-like system of orbital solar collectors as a means of achieving maximal solar energy efficiency. In its most feasible form, a Dyson sphere is more strictly a series of Dyson rings: huge energy-gathering satellites deployed in rings that orbit the sun, sharing a common axis of rotation, with an orbital radius approximately equal to the earth-sun distance. A similar variant is the Dyson swarm, where solar collecting satellites and space habitats are arranged in denser and more complex patterns to intercept more of the star's output, at the risk of orbital perturbations and instability as the independent orbits of the cloud of collectors interact with each other. Additional variants of the Dyson sphere include non-orbital collectors, anchored by light sails to remain stationary in relation to the star.

The idealized Dyson sphere is the Dyson shell, a shell of solid matter that would completely enclose a star and thus intercept all of its emissions, but it would be likely to require more matter to construct than is available in our entire solar system, even if all of the planets, gas giants included, were broken down into building materials. Furthermore, no existing or theoretical material would be able to withstand the compressive pressure that a static, rigid sphere large enough to encompass the sun would exert on itself.

Practical Dyson spheres will therefore never be 100% efficient, with gaps between collectors allowing energy to escape, and with the collectors themselves absorbing and then reradiating a portion of the energy they capture as waste heat. Theoretically, the wavelengths of this reradiated energy would be sufficiently different from the expected wavelengths of light for the star's spectral type that they could be detected at interstellar distances. If an alien civilization has constructed a Dyson sphere using the kind of materials that are available in our solar system, then the infrared radiation re-emitted by the solar collectors should be detectable from earth. If there are Kardashev Type II civilizations out there, we should be able to see them through the altered light of their star system.
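
A quick sketch of why that waste heat should stand out (my own illustration, using Wien's displacement law rather than anything in the original post): collectors radiating at roughly room temperature peak deep in the infrared, far from the visible-light peak of a Sun-like star.

    WIEN_B = 2.898e-3   # Wien's displacement constant, metre-kelvins

    def peak_wavelength_m(temperature_k):
        """Wavelength at which a black body of the given temperature radiates most strongly."""
        return WIEN_B / temperature_k

    print(f"Sun's photosphere (~5800 K): {peak_wavelength_m(5800) * 1e9:.0f} nm (visible light)")
    print(f"Dyson collectors (~300 K):   {peak_wavelength_m(300) * 1e6:.0f} um (mid-infrared)")

A star surrounded by collectors would therefore look unusually dim in visible light and unusually bright at around ten micrometres, which is exactly the kind of anomaly an infrared survey can hunt for.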

Day 097 - Biophilic universes

Submitted by Sam on 26 August, 2011 - 01:44

It is almost tautologically obvious that the conditions of our universe are very finely tuned for life. The fundamental physical constants which underlie all of time and space are all very life-friendly, and if they happened to have slightly different values they would fall outside of the remarkably narrow range that conspires to produce a universe conducive to life. The smallest difference in the electric charge of an electron, for instance, or the ratio between the masses of a proton and an electron, or the strength of the interactions between nucleons in an atom, would produce a radically different universe, with stars governed by drastically different physics, unable to produce the diversity of elements necessary for the building blocks of life, where the pathway to biological evolution would be forever blocked.

These facts have been compiled into the philosophical argument called the 'strong anthropic principle', which states that the universe must necessarily take on values consistent with the conditions necessary for life, otherwise there would be no intelligent observers to observe them. In an attempt to construct a plausible scientific reasoning for this effect, complexity theorist James Gardner has synthesized the startling 'selfish biocosm' hypothesis, an explanatory paradigm that accounts for the life-friendly qualities of the anthropic principle by placing the emergence of life and intelligence not as secondary and accidental adjuncts to the primary physical phenomena of the universe, but instead as crucial parts of creation itself.

In the selfish biocosm theory, the fundamental laws and constants of nature function to encode the emergence of life and intelligence at the most fundamental level, like cosmic-scale DNA. All physical laws are therefore a developmental code, providing a blueprint prescribing how the universe will grow and how it will produce life. In this scheme, life's developmental destiny is to evolve into high intelligences with a command over the physical laws of the universe themselves, able to engineer them to give birth to new, intelligent universes. The universe itself is a replicator, guided to spawning other universes with similar life-generating properties by the intelligent life it encodes. Through experimentation and engineering, intelligent life will create a directed universe-scale evolution where the laws and constants that are conducive to life are selected and optimized in each newly created universe. The universe creates life to spawn more universes to create more life.

Day 096 - Expanding inwards

Submitted by Sam on 24 August, 2011 - 23:03

Self-replicating autonomous space probes are one of the fastest theoretical ways for a civilization to colonize space. They allow an exponentially fast expansion which would rapidly move a civilization through the three types of the Kardashev scale – a measure of the total amount of energy harnessed by a civilization. The scale relates an increase in the amount of energy a civilization has at its disposal with its level of space colonization, with Type I civilizations commanding the resources of their home planet, Type II those of their star system and Type III those of their galaxy. The Kardashev scale underpins a widely accepted expansion hypothesis, which predicts that some fraction of advanced civilizations will tend to meet their increasing energy demands through physical expansion, space travel and colonization.
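
For a concrete sense of where a civilization sits on this scale, Carl Sagan's commonly quoted interpolation (my addition, not part of the original post) turns total power use into a continuous rating:

    import math

    def kardashev_rating(power_watts):
        """Sagan's interpolation: K = (log10(P) - 6) / 10, with P in watts.
        Type I is ~1e16 W, Type II ~1e26 W, Type III ~1e36 W."""
        return (math.log10(power_watts) - 6) / 10

    print(f"{kardashev_rating(2e13):.2f}")     # humanity today, ~2e13 W: about 0.73
    print(f"{kardashev_rating(3.8e26):.2f}")   # the Sun's full output: about 2.06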

An alternative hypothesis for the fate of intelligent civilizations, proposed by futurist John Smart, predicts that the drive towards expansion will not be satisfied by space colonization but rather by “transcension” into inner space. Contrary to expansion hypotheses, the transcension hypothesis states that all sufficiently advanced civilizations will advance using progressively less space, time, energy and matter in each step of their evolutionary development, becoming ever more miniaturized and doing more and more with less and less. The hypothesis is based on observations that complex systems frequently grow with accelerating speed through increases in both the efficiency and the density of their physical resource inputs, using fewer finite resources to encode ever more information.

The transcension hypothesis predicts that civilizations will be driven, with few exceptions, towards technologies which increase both in spatial locality and time density, as systems become smaller and faster. Computational devices will be developed beyond the nanoscale to produce femtotechnology, where information is encoded and computed by subatomic particles, perhaps using a combination of single electron transistors, photonics or spintronics. All sufficiently advanced civilizations will shrink their intelligent systems through the 25 orders of magnitude between the scale of atoms (around 10⁻¹⁰ m) and the Planck length (around 1.6 × 10⁻³⁵ m, the smallest physically meaningful measurement), a vast inner space available for engineering almost as broad as the 30 orders of magnitude inhabited by terrestrial biological life.

Once the physical limit of computational miniaturization is reached, the only way for continued exponential growth is to approach a black-hole-like singularity of energy density. So much computational circuitry would be packed into such a small space that it would collapse in on itself to form a black hole, so dense that light could not escape its gravity. According to some theories, only processed information would be able to escape from within the black hole's event horizon, encoded by entangled high-energy particles released when matter falls through it. With the right manipulation a black hole could be the ultimate computer, and may be the destiny of all civilizations that exhaust the computational capacities of their local resources.
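
Seth Lloyd's 'ultimate laptop' analysis (my addition, not cited in the original post) gives a flavour of how extreme those ultimate limits are: the Margolus-Levitin theorem caps the number of elementary operations per second that any lump of matter can perform at 2E/πħ, where E is its total energy.

    import math

    HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
    C = 2.998e8                # speed of light, m/s

    def max_ops_per_second(mass_kg):
        """Margolus-Levitin bound: a system of total energy E = m * c^2 can
        perform at most 2E / (pi * hbar) elementary operations per second."""
        energy = mass_kg * C**2
        return 2 * energy / (math.pi * HBAR)

    # One kilogram of matter, fully exploited as a computer:
    print(f"{max_ops_per_second(1.0):.1e} operations per second")   # ~5.4e50

Packing computation ever more densely towards this bound is one reason the black-hole-density endpoint keeps appearing in these arguments.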

Day 095 - Colonizing the Galaxy

Submitted by Sam on 24 August, 2011 - 02:49

If we do ever encounter extraterrestrial intelligence first hand, then it seems most likely to be machine intelligence rather than a biological intelligence like us. Biological beings are simply not robust enough (being too huge and too squishy) in comparison to purpose-built engineered solutions for crossing interstellar distances. It seems likely that any space-faring civilization, including our own, will eventually reach a level of sophistication where it becomes more effective and more economical to send intelligent machines to colonize and explore the universe than it would be to send spacecraft built to sustain bulky, vulnerable biological life.

A civilization in possession of advanced nanotechnology and equipped with nano-scale universal constructors could in theory create self-replicating robot spacecraft that would be highly efficient at galactic exploration, moving from planetary system to planetary system to source the materials to build copies of themselves. If a planetary system were anything like our own solar system, then the replicators would have a wealth of raw material to work with, harvesting asteroids, comets, dust and planets as appropriate. This type of machine is known as a Von Neumann probe, named after the mathematician John von Neumann, who was one of the first to develop a mathematical theory of how machines could make replicas of themselves.

Equipped with a basic propulsion system and carrying a payload as small as a nanoscale self-replicating constructor, these probes could colonize the entire Galaxy in only 4 million years. Directed by on-board programming, perhaps even an artificial intelligence powered by nanocomputers, the probes would replicate in each new system, sending some copies further into the Galaxy and instructing others to explore the current system and make scientific observations to be transmitted back to the home planet. Equipped with a universal constructor, some probes could be programmed to terraform suitable planets, creating life-sustaining environments to be 'seeded' with a biological payload (which could either be synthesized from the molecular level up using genomic instructions stored on-board, or alternatively derived from stored frozen embryos) to create a living colony of biological pioneers tended by artificial sentinels. This process would repeat with exponential rapidity to create swarms of trillions of self-replicating probes, allowing a civilization to explore and colonize a vast area of space in an astonishingly short amount of time.
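
A crude wavefront model (my own illustrative numbers, not taken from a specific study) shows how a figure of a few million years falls out: probes hop between neighbouring star systems at a few percent of light speed, pausing briefly at each stop to build and launch their successors.

    GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way's disc, light-years
    HOP_DISTANCE_LY = 10           # assumed typical spacing between useful star systems
    PROBE_SPEED_C = 0.025          # assumed cruise speed as a fraction of light speed
    REPLICATION_YEARS = 100        # assumed time to mine, build and launch daughter probes

    hops = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY
    years_per_hop = HOP_DISTANCE_LY / PROBE_SPEED_C + REPLICATION_YEARS
    total_years = hops * years_per_hop

    print(f"{total_years:.1e} years to cross the Galaxy")   # ~5e6 years

With these assumptions the crossing takes around five million years – the same order of magnitude as the 4 million quoted above – and it is the travel time, not the replication time, that dominates.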

As with the risk of a replicator-induced earth-bound grey-goo scenario, Von Neumann probes carry the threat of dramatic misfirings which could consume entire planetary systems in obedience to their directive to reproduce. Whilst artificially intelligent probes would be resistant to error, there is always the risk of mutation in the replication process, just as there is mutation in biological reproduction. A cosmic ray could cause a misalignment of the atomic architecture of the probe during its construction, creating a mutation that would eventually evolve a new “species” of probe, potentially with a different interpretation of its programming. All it would take is one of the trillions of probes to malfunction for the Galaxy to be threatened by a technological cancer.

Day 094 - Rare Earth

Submitted by Sam on 23 August, 2011 - 00:11

Most arguments for the existence of extraterrestrial life rest on the 'principle of mediocrity', which states that the properties and evolution of our solar system, including the processes that led to life on Earth, are not unusual in any important way and could be common throughout the Universe. The counter-argument to the principle of mediocrity is the 'rare earth hypothesis', which concludes that all of the conditions which conspire to permit complex life on Earth are exceptionally rare. The rare earth hypothesis has been presented as a plausible solution to the Fermi paradox, and an application of it concludes Stephen Webb's book Where is Everybody?.

In order to support life as we know it, a star system needs to be located in a very specific segment of the galaxy. It must be close enough to the galactic centre that its star contains a certain proportion of heavy elements, which are crucial for the formation of rocky planets containing the molecular components for life's building blocks. Additionally, the planetary system needs to be located far enough away from the dangerous centre of the galaxy, away from its high levels of radiation and its supermassive black hole, so that carbon-based life can develop. This is known as the galactic habitable zone, and it may only encompass around 20% of the stars in our galaxy; the other 80% are exposed to conditions that make the evolution of complex lifeforms unviable.

Of the stars in the galactic habitable zone, perhaps only about 5% are like our sun – stable, not too bright and not too dull – and so we need only consider planets orbiting sun-like stars in our search for complex life in the form that we know already. Of these planets, we then need only consider those with circular orbits which allow them to remain in their star's continuously habitable zone for billions of years at a time – that is, those planets without erratic orbits and which are close enough to their star to maintain liquid water on their surface for large, continuous periods of time. Because a star's habitable zone moves outward as the star ages, growing hotter and brighter with time, a planet needs to be in a very particular orbit to avoid either permanently icing over or burning up over the course of the billions of years required for complex life to evolve. Perhaps, as Webb suggests, as few as 0.1% of the remaining planets orbit within this continuously habitable zone.

Now, with the conditions on these remaining planets set to support the kind of carbon-based life that we know here on Earth, we have to contend with the probability that life will evolve. In itself, this could be an exceptionally rare event, or it could be a probable occurrence whenever conditions permit. We don't know, but if we say that the chance of life evolving now that everything is prepared is 0.05, then there are half a million planets in our galaxy supporting life.

The chance that these planets will evolve intelligent life is whittled down further by disasters and mass extinctions from which life cannot recover. Asteroid impacts, global glaciation and supervolcanoes are potential culprits, and on up to 20% of all life-bearing planets they may permanently extinguish the inhabitants, or at least prevent the formation of complex multi-cellular life. In fact, the evolution of eukaryotic cells (those containing complex structures like mitochondria or chloroplasts) from prokaryotic cells (those without a nucleus or other complex structures) took many millions of years on earth, and is by no means inevitable. It may not happen on many of these worlds at all, and they may never see multicellular life. Webb estimates that one in forty life-viable planets might have the conditions that conspire to permit multicellular life to evolve from single-celled ancestors, but he cautions that this (like all of the other variables) is merely a guess.

Finally, in order for these planets to bear the kind of advanced, abstractly intelligent civilizations that could have developed the technologies to contact or visit us, Webb believes that the remaining planets need to have the conditions necessary for complex language to develop – the crucial enabling step which allowed our species to truly master technology. This final step is perhaps so unlikely that it has only ever happened once, here on Earth. In this scheme, there are still planets in our galaxy where life is common, but it is frequently only unicellular, rarely multicellular, and intelligent only here. This model of the universe leaves us with many fabulously diverse extraterrestrial ecosystems to explore, but ultimately alone as an advanced intelligence. It neatly explains why we have never heard from nor seen an alien.
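
Chaining the post's factors together makes the whittling explicit. This is a rough sketch: the star count is my own assumption (roughly a trillion stars reproduces the "half a million" figure above, though common estimates of the Milky Way's stellar population run lower), and Webb's own numbers differ in their details.

    # Assumed number of stars in the Galaxy (an assumption, chosen so the chain
    # reproduces the half-million figure quoted above).
    N_STARS = 1e12

    candidates = N_STARS
    candidates *= 0.20     # lie within the galactic habitable zone
    candidates *= 0.05     # orbit a stable, Sun-like star
    candidates *= 0.001    # sit in a continuously habitable orbit for billions of years
    candidates *= 0.05     # life actually arises
    print(f"planets with life:         {candidates:,.0f}")   # 500,000

    candidates *= 0.80     # avoid sterilizing catastrophes
    candidates *= 1 / 40   # evolve multicellular life from single-celled ancestors
    print(f"planets with complex life: {candidates:,.0f}")   # 10,000

The final language-and-technology step then has to cut those remaining ten thousand or so worlds down to one – us – for the rare earth hypothesis to resolve the Fermi paradox.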
