Month: May 2019

  • Chance Coats presented the findings at the 2019 EuroSys conference. Credit: The Grainger College of Engineering One of the latest cyber threats involves hackers encrypting user files and then charging “ransom” to get them back. In the paper, “Project Almanac: A Time-Traveling Solid State Drive,” University of Illinois students Chance Coats and Xiaohao Wang and Assistant Professor Jian Huang from the Coordinated Science Laboratory look at how they can use the commodity storage devices already in a computer to save the files without having to pay the ransom. “The paper explains how we leverage properties of flash-based storage that currently exist in most laptops, desktops, mobiles, and even IoT devices,” said Coats, a graduate student in electrical and computer engineering (ECE). “The motivation was a class of malware called ransomware, where hackers will take your files, encrypt them, delete the unencrypted files and then demand money to give the files back.” The flash-based solid-state drives Coats mentioned are part of the storage system in most computers. When a file is modified, rather than getting rid of the old version immediately, the solid-state drive saves the updated version to a new location. Those old versions are the key to thwarting ransomware attacks: if there is an attack, the tool described in the paper can revert to a previous version of the file. The tool would also help when a user accidentally deletes one of their own files. Like any new tool, there is a trade-off. “When you want to write new data, it has to be saved to a free block, or a block that has already been erased,” said Coats. “Normally a solid-state drive would delete old versions in an effort to erase blocks in advance, but because our drive is keeping...
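The out-of-place update behavior Coats describes can be sketched in a few lines. This is a toy model, not the Project Almanac firmware: the class, method names, and rollback policy below are illustrative assumptions. Like a flash translation layer, it appends every write to a log and keeps a pointer to the latest version, so earlier versions survive and can be restored after a malicious overwrite.

```python
class VersionedStore:
    def __init__(self):
        self.log = []      # append-only log of (name, data) writes, like flash pages
        self.index = {}    # name -> log position of the current version

    def write(self, name, data):
        # Out-of-place update: append rather than overwrite in place.
        self.log.append((name, data))
        self.index[name] = len(self.log) - 1

    def read(self, name):
        return self.log[self.index[name]][1]

    def rollback(self, name, steps=1):
        # Recover an earlier version by walking the log backwards.
        positions = [i for i, (n, _) in enumerate(self.log) if n == name]
        self.index[name] = positions[-(steps + 1)]
        return self.read(name)


store = VersionedStore()
store.write("report.txt", b"original contents")
store.write("report.txt", b"ENCRYPTED BY RANSOMWARE")
recovered = store.rollback("report.txt")  # revert past the malicious write
```

The trade-off Coats mentions shows up here too: because old `(name, data)` pages are retained rather than erased, the log grows and free blocks are consumed faster.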
  • Credit: Liu et al. Researchers at Leiden University and the National University of Defense Technology (NUDT), in China, have recently developed a new approach for image-text matching, called CycleMatch. Their approach, presented in a paper published in Elsevier’s Pattern Recognition journal, is based on cycle-consistent learning, a technique that is sometimes used to train artificial neural networks on image-to-image translation tasks. The general idea behind cycle-consistency is that when transforming source data into target data and then vice versa, one should finally obtain the original source samples. When it comes to developing artificial intelligence (AI) tools that perform well in multi-modal or multimedia-based tasks, finding ways to bridge images and text representations is of crucial importance. Past studies have tried to achieve this by uncovering semantics or features that are relevant to both vision and language. When training algorithms on correlations between different modalities, however, these studies have often neglected or failed to address intra-modal semantic consistency, which is the consistency of semantics for the individual modalities (i.e. vision and language). To address this shortcoming, the team of researchers at Leiden University and NUDT proposed an approach that applies cycle-consistent embeddings to a deep neural network for matching visual and textual representations. “Our approach, named as CycleMatch, can maintain both inter-modal correlations and intra-modal consistency by cascading dual mappings and reconstructed mappings in a cyclic fashion,” the researchers wrote in their paper. “Moreover, in order to achieve a robust inference, we propose to employ two late-fusion approaches: average fusion and adaptive fusion.” The approach devised by the researchers integrates three feature embeddings (dual, reconstructed and latent embeddings) with a neural network for image-text matching. 
The method has two cycle branches, one starting from an image feature in the visual space and one from a text feature in the textual space....
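The two cycle branches can be made concrete with a small numerical sketch. Everything here is an assumption for illustration: the paper's cascaded deep network is replaced by plain linear maps, and mean-squared error stands in for its actual loss terms.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64
W_v2t = rng.normal(size=(dim, dim)) * 0.1   # dual mapping: visual -> textual space
W_t2v = rng.normal(size=(dim, dim)) * 0.1   # dual mapping: textual -> visual space

img = rng.normal(size=(8, dim))             # batch of image features
txt = rng.normal(size=(8, dim))             # batch of matching text features

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Inter-modal correlation: mapped features should land near their paired
# feature in the other modality.
inter = mse(img @ W_v2t, txt) + mse(txt @ W_t2v, img)

# Intra-modal consistency: mapping to the other space and back (the
# reconstructed mapping) should return the original feature -- the two
# cycle branches described above.
cycle = mse(img @ W_v2t @ W_t2v, img) + mse(txt @ W_t2v @ W_v2t, txt)

loss = inter + cycle
```

Minimizing `cycle` alongside `inter` is what keeps the per-modality semantics consistent rather than only aligning the two spaces.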
  • The hypothetical implementation of PixelGreen in the dense urban context of Hong Kong. Right: Potential sites that combine the vertical and horizontal surfaces (highlighted in green) of existing high-rise buildings in the development of PixelGreen. Credit: Khoo & Wee. Researchers at Deakin University and the University of Hong Kong have recently designed a hybrid green architectural wall system for high-rise buildings that integrates a vertical micro-farm and a media screen. They presented this wall, called PixelGreen, in a paper published on ResearchGate. PixelGreen is designed for integration into the wall surfaces of existing buildings, turning them into analogue media screens. “In this research, we explore the opportunity for new design possibilities to achieve a hybrid architectural wall system as a reciprocal retrofit for existing high-rise buildings surfaces, integrating a vertical micro-farm and media screen,” Chin Koi Khoo and H Koon Wee, the two researchers who carried out the study, told TechXplore via email. In this age of intense urbanization, a vast amount of arable land has been taken up by cities, which has significantly reduced the amount of crops produced every year. This could lead to a significant shortage of food, which might cause serious issues over the next few decades. Researchers have thus been trying to come up with alternative ways to produce crops, one of which is an interesting solution called “vertical farms.” Vertical farms and gardens are essentially buildings (e.g., skyscrapers) in which crops are grown in vertically stacked layers. This practice has recently gained popularity, particularly in densely populated urban environments. In addition to increasing the production of crops, vertical farms could foster a greater sense of community. A conceptual diagram of PixelGreen with mediated content formed by multiple species of edible plants within ‘pigeonhole pixels’. 
Progress through three repeatable steps—sow, grow and harvest—is...
  • Credit: CC0 Public Domain The human body’s mechanisms are marvelous, yet they haven’t given up all their secrets. To truly conquer human disease, it is crucial to understand what happens at the most elementary level. Essential functions of the cell are carried out by protein molecules, which interact with each other in varying complexity. When a virus enters the body, it disrupts their interactions and manipulates them for its own replication. This disruption also underlies genetic diseases, and it is of great interest to understand how viruses operate. Adversaries like viruses inspired Paul Bogdan, associate professor in the Ming Hsieh Department of Electrical and Computer Engineering, and recent Ph.D. graduate Yuankun Xue, both of USC’s Cyber Physical Systems Group, to determine exactly how viruses interact with proteins in the human body. “We tried to reproduce this problem using a mathematical model,” said Bogdan. Their groundbreaking statistical machine learning research, “Reconstructing missing complex networks against adversarial interventions,” was published in the journal Nature Communications earlier this April. Xue, who earned his Ph.D. in electrical and computer engineering last year with the 2018 Best Dissertation Award, said: “Understanding the invisible networks of critical proteins and genes is challenging, and extremely important to design new medicines or gene therapies against viruses and even diseases like cancer.” The ‘protein interaction network’ models each protein as a ‘node.’ If two proteins interact, there is an ‘edge’ connecting them. Xue explained, “An attack by a virus is analogous to removing certain nodes and links in this network.” Consequently, the original network is no longer observable. “Some networks are highly dynamic. The speed at which they change may be extremely fast or slow,” Bogdan said. “We may not have sensors to get accurate measurements. Part of the network cannot be observed and hence becomes invisible.”...
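Xue's node-and-edge picture can be illustrated with a toy adjacency structure. The protein names and interactions below are illustrative examples, not the paper's data, and the `attack` helper is an assumption standing in for the adversarial intervention:

```python
# A tiny protein interaction network: each protein is a node, each
# interaction an undirected edge, stored as an adjacency dict of sets.
network = {
    "p53":   {"MDM2", "BRCA1"},
    "MDM2":  {"p53"},
    "BRCA1": {"p53", "RAD51"},
    "RAD51": {"BRCA1"},
}

def attack(net, removed):
    """Model a viral attack: remove the given nodes and every edge touching them."""
    return {n: nbrs - removed for n, nbrs in net.items() if n not in removed}

observed = attack(network, {"MDM2"})
# The reconstruction problem the paper studies is the inverse of this
# operation: inferring the missing nodes and links from `observed` alone.
```

After the attack, only the partial network remains visible, which is exactly the "invisible network" setting Bogdan describes.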
  • Credit: Egor Zakharov et al. A paper discussing an artificial intelligence feat now up on arXiv is giving tech watchers yet another reason to feel this is the Age of Enfrightenment. “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models,” by Egor Zakharov, Aliaksandra Shysheya, Egor Burkov and Victor Lempitsky, reveals a technique that can turn photos and paintings into animated talking heads. Author affiliations include the Samsung AI Center, Moscow and the Skolkovo Institute of Science and Technology. The key player in all this? Samsung. It opened research centers in Moscow, Cambridge and Toronto last year, and the end result might well be more headlines in AI history. Yes, the Mona Lisa can look as if she is telling her TV host why she favors leave-in hair conditioners. Albert Einstein can look as if he is speaking in favor of no hair products at all. The authors wrote that “we consider the problem of synthesizing photorealistic personalized head images given a set of face landmarks, which drive the animation of the model.” Even one-shot learning, from a single frame, is possible. Khari Johnson of VentureBeat noted that they can generate realistic animated talking heads from images without relying on traditional methods such as 3D modeling. The authors highlighted that “Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters.” What is their approach? Ivan Mehta in The Next Web walked readers through the steps that form their technique. “Samsung said that the model creates three neural networks during the learning process. First, it creates an embedder network that links frames related to face landmarks...
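The few-shot flow described above, from a handful of frames to a person-specific initialization that drives animation from landmarks, can be caricatured in a toy sketch. Everything here is an assumption: the embedder and generator are reduced to stand-in linear maps, the shapes are arbitrary, and the adversarial discriminator is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
feat = 32  # assumed feature size for both frames and landmarks

def embedder(frames, landmarks):
    # Fuse each frame with its landmarks, then average into a single
    # person-specific embedding (the few-shot "initialization").
    return np.mean([np.concatenate([f, l]) for f, l in zip(frames, landmarks)], axis=0)

W_gen = rng.normal(size=(2 * feat, feat)) * 0.1

def generator(landmark, person_vec):
    # Drive the synthesized head from target landmarks, conditioned on
    # the person embedding (a stand-in for the real generator network).
    return np.tanh((person_vec + np.concatenate([landmark, landmark])) @ W_gen)

frames = [rng.normal(size=feat) for _ in range(3)]   # few-shot: three source frames
lms = [rng.normal(size=feat) for _ in range(3)]      # their face landmarks
person = embedder(frames, lms)                       # person-specific vector
out = generator(rng.normal(size=feat), person)       # one synthesized "frame"
```

The point of the sketch is the data flow: once `person` is computed from a few frames, new landmark sequences alone are enough to animate the head.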
  • Stanford Doggo. Credit: Kau et al. Researchers at Stanford University have recently created an open-source quadruped robot called Stanford Doggo. Their robot, presented in a paper pre-published on arXiv and set to appear on IEEE Xplore, exceeds the performance of many state-of-the-art legged robots in vertical jumping agility. “About a year and a half ago, I started the Extreme Mobility sub-team at Stanford Student Robotics,” Nathan Kau, one of the researchers who carried out the study, told TechXplore. “We were interested in building agile robots that could explore environments where wheeled or flying vehicles wouldn’t be effective. A few really amazing robots that can work in these types of environments already exist, but they were quite expensive, custom designs that we wouldn’t be able to replicate. So last year, we set out to design and prototype an inexpensive four-legged robot inspired by these groups, and Stanford Doggo is the result of our efforts.” The robot developed by Kau and his colleagues has four legs, each of which is powered by two motors. Belt drives connect the motors to the axles of the leg linkages, which makes the links rotate at one-third of the motors’ speed. This speed reduction nearly triples the torque, and the ratio is low enough to ensure that forces from the environment are sensed by the motor. “This effect is similar to riding a bike at a low gear, and it’s easier to feel bumps in the road in your feet than it is at a high gear,” Kau explained. “These kinds of mechanisms, called quasi-direct drive actuators, are somewhat common now in legged robots. However, we found that few if any groups were using this type of actuator on smaller, low-cost walking robots.” Stanford Doggo is a highly agile and inexpensive robot that can be...
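The drivetrain arithmetic behind the quasi-direct drive design can be checked back-of-the-envelope. The motor numbers below are made up for illustration; only the 3:1 reduction comes from the description above, and losses are ignored.

```python
gear_ratio = 3.0            # motor turns three times per leg-linkage turn

motor_speed_rpm = 3000.0    # assumed motor speed (illustrative)
motor_torque_nm = 0.5       # assumed motor torque (illustrative)

# An ideal reduction trades speed for torque by the same factor.
output_speed_rpm = motor_speed_rpm / gear_ratio
output_torque_nm = motor_torque_nm * gear_ratio

# Transparency works in reverse: a force felt at the foot appears at
# the motor divided only by 3, so motor current can "feel" ground
# contact -- the low-gear bicycle effect Kau describes.
foot_torque_nm = 1.2
reflected_at_motor_nm = foot_torque_nm / gear_ratio
```

A much higher ratio (say 100:1) would give more torque but divide foot forces by 100 at the motor, making contact effectively invisible, which is why quasi-direct drives keep the ratio low.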
  • Credit: CC0 Public Domain One of the big issues with sustainable energy systems is how to store electricity that’s generated from wind, solar and waves. At present, no existing technology provides large-scale storage and energy retrieval for sustainable energy at a low financial and environmental cost. Engineered electroactive microbes could be part of the solution; these microbes are capable of borrowing an electron from solar or wind electricity and using the energy to break apart carbon dioxide molecules from the air. The microbes can then take the carbon atoms to make biofuels, such as isobutanol or propanol, that could be burned in a generator or added to gasoline, for example. “We think biology plays a significant role in creating a sustainable energy infrastructure,” said Buz Barstow, assistant professor of biological and environmental engineering at Cornell University. “Some roles will be supporting roles and some will be major roles, and we’re trying to find all of those places where biology can work.” Barstow is the senior author of “Electrical Energy Storage With Engineered Biological Systems,” published in the Journal of Biological Engineering. Adding engineered electrochemical (non-biological) elements could make this approach even more productive and efficient than microbes alone. At the same time, having so many options creates a daunting number of engineering choices; the study supplies information to help determine the best design based on needs. “We are suggesting a new approach where we stitch together biological and non-biological electrochemical engineering to create a new method to store energy,” said Farshid Salimijazi, a graduate student in Barstow’s lab and the paper’s first author. Natural photosynthesis already offers an example of storing solar energy at a huge scale and turning it into biofuels in a closed carbon loop. It captures about six times as much solar energy in a year as all...
  • If you’re a Gmail user, you’re probably aware by now of a major redesign – currently optional, soon to be compulsory – that aims to tackle the problem of email overload by using artificial intelligence. One of the most annoying aspects of living alongside other humans is the way they’re constantly making demands on your emotions and attention: you have to figure out when to sacrifice your own priorities in order to help them; you’ve got to empathise with them when they’re sad or ill, and so on. Traditionally, antidotes for email overload work by filtering out messages from people you don’t care about. But the new Gmail focuses on messages from people you do care about – and promises to do some of that caring on your behalf. A new “nudge” feature will automatically decide whether your friend Belinda’s lunch invite is important enough to prompt you to hurry up and reply. The “high-priority notifications” feature will decide whether to interrupt your meeting by pinging you when your kids get in touch. And “smart replies” offers entire pre-written messages, so you can respond to news of Uncle Norbert’s latest gastrointestinal infection with a single click: “Oh no! Feel better soon!” If I’ve any criticism, it’s Google’s lack of ambition. Why stop at nudges and common brief phrases? Why can’t Gmail write whole emails, scouring my message archive for what I’ve been up to, then sending long, chatty updates on it all to far-flung friends? What about sweet little “thinking of you” messages to my partner? Or couldn’t you somehow link Gmail to databases of births, marriages and deaths, so my contacts could automatically receive missives of celebration or condolence as appropriate? I’m reminded of Flaneur, a hypothetical app imagined by the writer Curtis Brown, aimed at those who disdain...