Stuff South Africa – South Africa's Technology News Hub

Could a video game developer win the Nobel Prize for Literature?
Sat, 13 Apr 2024

In October 2016, the Swedish Academy announced that it was awarding the Nobel Prize for Literature to the singer-songwriter Bob Dylan for “having created new poetic expressions within the great American song tradition”. The decision sent out shockwaves: for the first time, a musician had received the most prestigious literary award on the planet. It sparked debate, with many questioning the decision, and even sarcastic suggestions that novelists could now aspire to winning a Grammy.

The controversy fed into much-needed debates on the boundary between poetry and song, but the question of what constitutes literature is much broader. Does it mean the same as it did in 1901 when the first Nobel Prize for Literature was awarded?

High and low culture

These questions date back far beyond 2016. In the late 1950s, a group of professors from the University of Birmingham founded a new interdisciplinary area of study, called cultural studies, in order to ask new questions: What was the role of TV and other mass media in cultural development? Is there a justification for distinguishing high and low culture? What is the relationship between culture and power?

These questions are all still relevant to current debates around literature. Often, the word “literary” is a status symbol, a seal of approval to distinguish “high” culture from more vulgar or less valuable “low” forms of culture. Comics, for example, were not invited to join the club until recently, thanks in part to a rebranding under the more respectable guise of “graphic novels”.

According to the Merriam-Webster dictionary, literature displays “excellence of form or expression and expressing ideas of permanent or universal interest”. It seems that an artist like Bob Dylan can take home the Nobel prize thanks to literature’s defining feature of “excellence of form or expression”, which is not strictly limited to the written word.

But how do we account for other language-based forms of expression? If performed works such as theatre or songwriting can be considered literature, where is the limit?

Word play: text-based video games

According to data from video game data consultancy Newzoo, more than 3 billion people play video games worldwide – almost half of the world’s population. In Spain alone, 77% of young people play video games, making them a massively relevant form of culture. But what does this have to do with “excellence of form or expression”? To answer this question we have to look back several decades.

When the first video games were developed in the 1950s, two distinct genres emerged: one was action oriented (such as the pioneering 1958 game Tennis for Two), and the other more text based. The original written games, known as “interactive fiction”, were made up exclusively of text, and the player’s job was to read and make decisions that would determine the game’s outcome using a keyboard.
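The read-and-choose loop that defines interactive fiction can be sketched in a few lines of Python. This is a toy illustration of the genre's mechanics – the rooms and passages below are invented, not taken from any real game:

```python
# A minimal interactive-fiction engine: each "room" is a passage of text
# plus a table of typed commands leading to other rooms.
STORY = {
    "cell": ("You wake in a stone cell. A door stands to the NORTH.",
             {"north": "hall"}),
    "hall": ("A torch-lit hall. Stairs lead UP; the cell is SOUTH.",
             {"up": "tower", "south": "cell"}),
    "tower": ("From the tower you see the open sea. THE END.", {}),
}

def play(commands, start="cell"):
    """Run a scripted play-through and return the transcript of passages."""
    room, transcript = start, []
    for cmd in commands:
        text, exits = STORY[room]
        transcript.append(text)          # the player reads...
        if cmd in exits:                 # ...then their decision moves the story
            room = exits[cmd]
    transcript.append(STORY[room][0])    # final passage: the outcome
    return transcript
```

The player's decisions determine which passages are ever seen – exactly the reading-plus-choice structure the genre inherited from its text-only origins.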

Screenshot of the game Mystery House on Apple II. The colour white was created by combining green and purple, producing white in the centre, but bleeding into the other two colours at the edges. Wikimedia Commons

The inclusion of images in adventure games would not arrive until 1980, when Mystery House became the first “graphic adventure” game. These would reach their heyday in the 1990s: famous examples include the first two Monkey Island games (1990, 1991), Day of the Tentacle (1993), Full Throttle (1995), and Grim Fandango (1998), though there were many others. Despite technological advances, these games inherited several features from interactive fiction, including the predominant role of text.

The experience of playing one of these titles is not so different from that of a book: reading, pauses, the possibility of backtracking, and so on. The player spends most of their time in dialogue with various characters in search of information, stories, or even banter and jokes that are irrelevant to the game’s progress, much like footnotes or subplots.

Several classic adventure games even have direct links to literature: The Abbey of Crime (1987) is a Spanish adaptation of Umberto Eco’s The Name of the Rose, while the legendary insult sword fighting of The Secret of Monkey Island was written by science fiction author Orson Scott Card. In Myst (1993), the gameplay itself revolves around two books.

Literature on the screen: “story-rich” games

In more recent years, a new sub-genre of adventure games – known as “story-rich” games – has become popular thanks to independent creators and producers. In Papers, Please (2013), a border policeman in a fictional dictatorial regime deals with terrible moral dilemmas on a daily basis. In Firewatch (2016), players take the role of a forest ranger who investigates a conspiracy by walkie-talkie. In Return of the Obra Dinn (2018), the player must reconstruct a tragedy on the high seas with the help of an incomplete book and a peculiar compass. In all these cases, gameplay and visuals take a back seat to strong narratives.

Screenshot from the video game Papers, Please. Papers, Please

A quintessential example is The Stanley Parable (2011), where the player takes the role of a worker in a strangely deserted office. They have to explore several corridors while trying unsuccessfully to interact with their surroundings, accompanied by the voice of an enigmatic narrator. Upon reaching a room with two open doors, the voiceover states that Stanley “entered the door on his left”.

The player can choose to follow the instructions or disobey, provoking the wrath of the narrator much like in the denouement of Miguel de Unamuno’s 1914 novel Fog, where the main character speaks directly to the author.


Read More: PlayStation unveils Community Game Help, crowdsources user gameplay


Each decision then opens up new paths leading to dozens of possible endings, similar to a “choose your own adventure” book. Its fragmentary and disordered story – as well as its playful spirit – is reminiscent of Julio Cortázar’s 1963 novel Hopscotch. The experience of playing the game is marked by postmodern literary features – as described by critics like Mikhail Bakhtin or Linda Hutcheon – including metafiction, intertextuality and parody.

One of its creators – Davey Wreden, a critical studies graduate – also created The Beginner’s Guide (2015), a game in which the player moves through levels of failed video games to learn more about their mysterious creator. In one, the player’s task consists solely of wandering through a virtual cave reading the countless comments left there by other frustrated players.

Screenshot from the videogame The Beginner’s Guide. Steam/The Beginner’s Guide

In recent years, the genre of digital or electronic literature has emerged, including books with QR codes, works that can only be read with virtual reality headsets, poetry collections published as apps, and so on. These works are fundamentally based on language, raising the question of why video games cannot also fit into this category.

This debate takes on added relevance today, as digital formats are having an undeniable impact on our reading habits. Just as today we accept oral cultures or popular music as literature, perhaps one day we will do the same with interactive stories like The Stanley Parable. Writing has always tried to break away from established ideas, and we know that literature is not limited to words on paper. Sometimes it pays to disobey the voice in our heads and walk through the door on the right, the one that leads to new, unexplored possibilities.


Nobel-winning quantum weirdness undergirds an emerging high-tech industry, promising better ways of encrypting communications and imaging your body
Tue, 11 Oct 2022

Unhackable communications devices, high-precision GPS and high-resolution medical imaging all have something in common. These technologies – some under development and some already on the market – all rely on the non-intuitive quantum phenomenon of entanglement.

Two quantum particles, like pairs of atoms or photons, can become entangled. That means a property of one particle is linked to a property of the other, and a change to one particle instantly affects the other particle, regardless of how far apart they are. This correlation is a key resource in quantum information technologies.
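The perfect correlation described above can be mimicked with a short classical simulation. This is deliberately a toy – a local hidden-variable model that reproduces only the agreement between measurements, not the Bell-inequality-violating statistics that the Nobel laureates' experiments actually probed:

```python
import random

def entangled_pair():
    """Toy model of an entangled pair: when both particles are measured
    in the same basis, the outcomes always agree, no matter how far
    apart the particles are. (A real entangled pair also violates Bell
    inequalities, which this classical sketch cannot reproduce.)"""
    shared = random.choice([0, 1])   # outcome fixed when the pair is created
    return shared, shared            # one result for each distant particle

results = [entangled_pair() for _ in range(1000)]
agreement = sum(a == b for a, b in results) / len(results)
```

Every run shows 100% agreement between the two "distant" measurements – the correlation that quantum information technologies treat as a resource.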

For the most part, quantum entanglement is still a subject of physics research, but it’s also a component of commercially available technologies, and it plays a starring role in the emerging quantum information processing industry.

Pioneers

The 2022 Nobel Prize in Physics recognized the profound legacy of the experimental work with quantum entanglement by Alain Aspect of France, John F. Clauser of the U.S. and Anton Zeilinger of Austria – work that has touched me personally since the start of my graduate school career as a physicist. Anton Zeilinger was a mentor of my Ph.D. mentor, Paul Kwiat, which heavily influenced my dissertation on experimentally understanding decoherence in photonic entanglement.

Decoherence occurs when the environment interacts with a quantum object – in this case a photon – to knock it out of the quantum state of superposition. In superposition, a quantum object is isolated from the environment and exists in a strange blend of two opposite states at the same time, like a coin toss landing as both heads and tails. Superposition is necessary for two or more quantum objects to become entangled.

Entanglement goes the distance

Quantum entanglement is a critical element of quantum information processing, and photonic entanglement of the type pioneered by the Nobel laureates is crucial for transmitting quantum information. Quantum entanglement can be used to build large-scale quantum communications networks.

On a path toward long-distance quantum networks, Jian-Wei Pan, one of Zeilinger’s former students, and colleagues demonstrated entanglement distribution to two locations separated by 764 miles (1,203 km) on Earth via satellite transmission. However, direct transmission rates of quantum information are limited due to loss, meaning too many photons get absorbed by matter in transit so not enough reach the destination.

Entanglement is critical for solving this roadblock, through the nascent technology of quantum repeaters. An important milestone for early quantum repeaters, called entanglement swapping, was demonstrated by Zeilinger and colleagues in 1998. Entanglement swapping jointly measures one photon from each of two entangled pairs, thereby entangling the two remaining photons, which were initially independent and can be far apart from each other.

Quantum protection

Perhaps the most well known quantum communications application is Quantum Key Distribution (QKD), which allows someone to securely distribute encryption keys. If those keys are stored properly, they will be secure, even from future powerful, code-breaking quantum computers.

While the first proposal for QKD did not explicitly require entanglement, an entanglement-based version was subsequently proposed. Shortly after this proposal came the first demonstration of the technique, through the air over a short distance on a table-top. The first demonstrations of entanglement-based QKD, by research groups led by Zeilinger, Kwiat and Nicolas Gisin, were published in the same issue of Physical Review Letters in May 2000.
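The key-sifting step shared by QKD protocols can be sketched as a toy simulation. The sketch below follows the textbook prepare-and-measure (BB84-style) description with an ideal, eavesdropper-free channel; it is a simplified model, not a reconstruction of any of the demonstrations above:

```python
import random

def qkd_sift(n=256):
    """Simplified BB84-style sifting sketch: Alice encodes random bits
    in randomly chosen bases; Bob measures in his own random bases;
    they publicly compare bases (not bits) and keep only the positions
    where the bases matched.  Ideal channel, no eavesdropper modelled."""
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("XZ") for _ in range(n)]
    bob_bases   = [random.choice("XZ") for _ in range(n)]
    # With matching bases Bob reads Alice's bit correctly; with
    # mismatched bases his result would be random, so those positions
    # are discarded during sifting.
    key_alice = [bit for bit, ba, bb in
                 zip(alice_bits, alice_bases, bob_bases) if ba == bb]
    key_bob = list(key_alice)   # ideal channel: sifted keys agree exactly
    return key_alice, key_bob

ka, kb = qkd_sift()
```

On average half the positions survive sifting; the shared result is the secret key used to encrypt the actual message. Any eavesdropper measuring in transit would disturb the quantum states and reveal herself as errors between the two keys – the security property the article describes.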

These entanglement-based distributed keys can be used to dramatically improve the security of communications. A first important demonstration along these lines was from the Zeilinger group, which conducted a bank wire transfer in Vienna, Austria, in 2004. In this case, the two halves of the QKD system were located at the headquarters of a large bank and the Vienna City Hall. The optical fibers that carried the photons were installed in the Vienna sewer system and spanned nine-tenths of a mile (1.45 km).

Entanglement for sale

Today, there are a handful of companies that have commercialized quantum key distribution technology, including my group’s collaborator Qubitekk, which focuses on an entanglement-based approach to QKD. With a more recent commercial Qubitekk system, my colleagues and I demonstrated secure smart grid communications in Chattanooga, Tennessee.

Quantum communications, computing and sensing technologies are of great interest to the military and intelligence communities. Quantum entanglement also promises to boost medical imaging through optical sensing and high-resolution radio frequency detection, which could also improve GPS positioning. There’s even a company gearing up to offer entanglement-as-a-service by providing customers with network access to entangled qubits for secure communications.

There are many other quantum applications that have been proposed and have yet to be invented that will be enabled by future entangled quantum networks. Quantum computers will perhaps have the most direct impact on society by enabling direct simulation of problems that do not scale well on conventional digital computers. In general, quantum computers produce complex entangled networks when they are operating. These computers could have huge impacts on society, ranging from reducing energy consumption to developing personally tailored medicine.

Finally, entangled quantum sensor networks promise the capability to measure theorized phenomena, such as dark matter, that cannot be seen with today’s conventional technology. The strangeness of quantum mechanics, elucidated through decades of fundamental experimental and theoretical work, has given rise to a new burgeoning global quantum industry.

Nobel prize: Svante Pääbo’s ancient DNA discoveries offer clues as to what makes us human
Sun, 09 Oct 2022

The Nobel prize in physiology or medicine for 2022 has been awarded to Svante Pääbo from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, “for his discoveries concerning the genomes of extinct hominins and human evolution”.

In other words, Pääbo has been awarded the prestigious prize for having sequenced the genomes of our extinct relatives, the Neanderthals and Denisovans, and for the fact that these discoveries have resulted in novel insights into human evolution.


Pääbo is widely regarded as having pioneered the field of ancient DNA, a research area dedicated to the recovery and analysis of DNA from historic and prehistoric remains.

Although Pääbo did his PhD in medical science at Uppsala University in Sweden in the early 1980s, he also studied Egyptology when he was at Uppsala. It was a logical next step that he took tools from molecular biology, garnered from his expertise in medical science, to better understand human prehistory.

Extracting DNA from ancient bones

Beginning in the 1980s, Pääbo studied ancient DNA in material ranging from mummified humans to extinct ground sloths. This work was technically challenging because ancient DNA is significantly degraded and can be contaminated.

In the decade that followed, he developed a series of methods and guidelines to recover and interpret authentic DNA and to minimise the risk of contamination from modern sources, especially from contemporary humans.

In the early 1990s, there was significant excitement in the field about the possibility of recovering DNA from dinosaurs. However, based on his knowledge of how DNA degrades over time, Pääbo remained sceptical that DNA could survive such a long time. He was later proven right.

For many of his colleagues, it was clear that Pääbo’s goal was always to recover Neanderthal DNA. But he took his time and carefully developed the methods for recovering and authenticating ancient DNA until these methods were mature enough to accomplish this objective.

Finally, in 1997, Pääbo and his colleagues published the first Neanderthal DNA sequences. In 2010 this was followed by the entire Neanderthal genome (that is, all the genetic information stored in the DNA of one Neanderthal).

Only a few years later, the group also published the genome from a previously unknown type of human, the Denisovans, distantly related to Neanderthals. This sequencing was based on a 40,000-year-old fragment of bone discovered in the Denisova cave in Siberia.

By virtue of being able to compare these with human genomes, one of the most important findings of Pääbo’s work has been that many modern humans carry a small proportion of DNA from Neanderthals and Denisovans. Modern humans picked up these snippets of DNA through hybridisation, when modern and archaic humans mixed, as modern humans expanded across Eurasia during the last ice age.

For example, particular Neanderthal genes affect how our immune system reacts to infections, including COVID-19. The Denisovan version of a gene called EPAS1, meanwhile, helps people survive at high altitudes. It’s common among modern-day Tibetans.

At the same time, in comparing the genomes of Neanderthals and Denisovans with those of modern humans, Pääbo and his colleagues have been able to highlight genetic mutations that are not shared. A large proportion of these are connected to how the brain develops.

By revealing genetic differences that distinguish living humans from our extinct ancestors, Pääbo’s influential discoveries provide the basis for exploring what makes us uniquely human.

  • Love Dalén is a Professor in Evolutionary Genetics, Centre for Palaeogenetics, Stockholm University
  • Anders Götherström is a Professor in Molecular Archaeology, Department of Archaeology and Classical Studies, Stockholm University
  • This article first appeared on The Conversation

Why insights of Nobel physicists could revolutionise 21st-century computing
Mon, 31 Oct 2016

British scientists David Thouless, Duncan Haldane and Michael Kosterlitz won this year’s Nobel Prize in Physics “for theoretical discoveries of topological phase transitions and topological phases of matter”. The reference to “theoretical discoveries” makes it tempting to think their work will not have practical applications or affect our lives some day. The opposite may well be true.

To understand the potential, it helps to understand the theory. Most people know that an atom has a nucleus in the middle and electrons orbiting around it. These correspond to different energy levels. When atoms group into substances, all the energy levels of each atom combine into bands of electrons. Each of these so-called energy bands has space for a certain number of electrons. And between each band are gaps in which electrons can’t flow.

If you apply an electrical charge (a flow of extra electrons) to a material, its conductivity is determined by whether the highest energy band has room for more electrons. If it does have room, the material will behave as a conductor. If not, you need extra energy to push the current of electrons into a new empty band and as a result the material behaves as an insulator. Understanding conductivity is vital to electronics, since electronic products ultimately rely on components that are electric conductors, semiconductors and insulators.

What Thouless, Haldane and Kosterlitz began to predict in the 1970s and 1980s and other theorists have since taken forward is that certain materials break this rule. Instead of having a gap between bands in which electrons can’t flow, they have a special energy level between their bands where certain unexpected things are possible.

This quality only exists on the surface or edge of these materials, and is very robust. It also depends to some extent on the shape of the material – the topology, as we say in physics. It behaves identically for a sphere and an egg, for example, but would be different for something shaped like a doughnut because of the hole in the middle. The first measurements of this kind of behaviour have been taken for a current along the boundary of a flat sheet.

Thouless, Haldane and Kosterlitz.

Computer power

The properties of these so-called topological materials could potentially be extremely useful. Electrical currents can move without resistance across their surface, for example, even where a device is moderately damaged. Superconductors can already do this without having topological properties, but they only work at very low temperatures – meaning you use a lot of energy keeping them cool. Topological materials have the potential to do the same job at higher temperatures.

This has important implications for computing: much of the energy computers currently use is spent running fans to remove the heat produced by electrical resistance in the circuits. Remove this heat problem and you potentially make them many times more energy efficient. This could massively reduce their carbon emissions, for instance. It could also lead to batteries with far longer life spans. Researchers are already experimenting with topological materials like cadmium telluride and mercury telluride to bring this vision to life.

Circuits in action. Titma Ongkantong

There is also the potential for a major breakthrough in quantum computing. Classical computers encode information by either applying voltage or not applying voltage to a chip. The computer reads this as a 0 or 1 respectively for each “bit” of information. You put these bits together to build up more complex information. This is how the binary system works.
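The bit-packing described above is easy to demonstrate – a short sketch using Python's built-in base-2 parsing:

```python
# Each "bit" is a voltage-off (0) or voltage-on (1) reading; eight of
# them read together encode one of 2**8 = 256 possible values.
bits = [0, 1, 0, 0, 0, 0, 0, 1]

# Interpret the on/off pattern as a binary number.
value = int("".join(map(str, bits)), 2)

# 0b01000001 is 65, which is the ASCII code for the letter "A".
letter = chr(value)
```

Stacking simple on/off readings into longer patterns is how classical machines build up arbitrarily complex information from nothing but voltages.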

With quantum computing, you deliver information to electrons instead of microchips. The energy levels of these electrons then correspond to zeros and ones just like in classical computers, but in quantum mechanics both can be true at the same time. Without getting into too much theory, this raises the possibility of computers that can process exceedingly large amounts of data in parallel and are therefore much faster.
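The "both at the same time" idea above can be illustrated with a two-amplitude state vector. This is a toy picture (real amplitudes only, ignoring complex phases), but it shows where the parallelism comes from:

```python
import math

# A qubit carries amplitudes for |0> and |1>; the measurement
# probabilities are the squared amplitudes, which must sum to 1.
state = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # equal superposition
p0, p1 = state[0] ** 2, state[1] ** 2          # 50/50 outcome odds

def amplitudes_for(n_qubits):
    """A register of n qubits is described by 2**n amplitudes at once -
    the exponential state space behind quantum parallelism."""
    return 2 ** n_qubits
```

Ten qubits already span 1,024 amplitudes, and every added qubit doubles the count – which is why manipulating even modest numbers of entangled electrons is so attractive to Google, IBM and others.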

While the likes of Google and IBM are researching how to manipulate enough electrons to create quantum computers that are more powerful than classical computers, one big obstacle is that these computers are very fragile with respect to surrounding “noise”. Whereas classical computers can cope with interference, quantum computers end up producing intolerable numbers of errors because of shaky support frames, stray electrical fields or air molecules hitting the processor even if you hold it in a high vacuum. This is the main reason why we don’t yet use quantum computers in our everyday lives.

One potential solution is to store information in more than one electron, since noise typically affects quantum processors at the level of single particles. Supposing you have five electrons all jointly storing the same bit of information, so long as the majority store it correctly, a disturbance to a single electron won’t undermine the system.
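The five-electron scheme described above is, classically speaking, a repetition code. A minimal sketch of how majority voting tolerates a single-particle disturbance:

```python
from collections import Counter

def encode(bit, copies=5):
    """Store the same logical bit in several physical carriers."""
    return [bit] * copies

def decode(carriers):
    """Recover the logical bit by majority vote over the carriers."""
    return Counter(carriers).most_common(1)[0][0]

stored = encode(1)       # five carriers all holding the logical 1
stored[2] = 0            # noise flips one carrier
recovered = decode(stored)
```

With five copies, up to two flipped carriers still decode correctly – the majority outvotes the noise, which typically strikes one particle at a time.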

Researchers have been experimenting with this so-called majority voting, but topological engineering potentially offers an easier fix. In the same way as topological superconductors can carry a flow of electricity well enough that it doesn’t get hampered by resistance, topological quantum processors could be robust enough to be insensitive to noise problems. They could yet offer a major contribution to making quantum computing a reality. Researchers in the US are working on it.

The future

Superdrugs? Funnyangel

It might take between ten and 30 years before scientists become sufficiently good at manipulating electrons to make quantum computing possible, but quantum computers would open up exciting possibilities. They could simulate the formation of molecules, for example, which is numerically too complicated for today’s computers. This could revolutionise drug research by enabling us to predict what will happen during chemical processes in the body.

To give just one other example, quantum computing has the potential to make artificial intelligence a reality. Quantum machines may be better at learning than classical computers, partly because they might be underpinned by much cleverer algorithms. Cracking AI could be a step change in human existence – for better or worse.

In short, the predictions of Thouless, Haldane and Kosterlitz have the potential to help revolutionise 21st-century computer technology. The Nobel committee recognised the importance of their work in 2016; we are likely to still be thanking them many decades into the future.
