Tech

Tripling the energy storage of lithium-ion batteries

Posted on

Date: June 14, 2018
Source: DOE/Brookhaven National Laboratory

Summary: Scientists have synthesized a new cathode material from iron fluoride that surpasses the capacity limits of traditional lithium-ion batteries.

FULL STORY


Substituting oxygen and cobalt into the cathode material prevents lithium from breaking chemical bonds and preserves the material’s structure.
Credit: Brookhaven National Laboratory 

As the demand for smartphones, electric vehicles, and renewable energy continues to rise, scientists are searching for ways to improve lithium-ion batteries — the most common type of battery found in home electronics and a promising solution for grid-scale energy storage. Increasing the energy density of lithium-ion batteries could facilitate the development of advanced technologies with long-lasting batteries, as well as the widespread use of wind and solar energy. Now, researchers have made significant progress toward achieving that goal.

A collaboration led by scientists at the University of Maryland (UMD), the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, and the U.S. Army Research Lab has developed and studied a new cathode material that could triple the energy density of lithium-ion battery electrodes. Their research was published on June 13 in Nature Communications.

“Lithium-ion batteries consist of an anode and a cathode,” said Xiulin Fan, a scientist at UMD and one of the lead authors of the paper. “Compared to the large capacity of the commercial graphite anodes used in lithium-ion batteries, the capacity of the cathodes is far more limited. Cathode materials are always the bottleneck for further improving the energy density of lithium-ion batteries.”

Scientists at UMD synthesized a new cathode material, a modified and engineered form of iron trifluoride (FeF3), which is composed of cost-effective and environmentally benign elements — iron and fluorine. Researchers have been interested in using chemical compounds like FeF3 in lithium-ion batteries because they offer inherently higher capacities than traditional cathode materials.

“The materials normally used in lithium-ion batteries are based on intercalation chemistry,” said Enyuan Hu, a chemist at Brookhaven and one of the lead authors of the paper. “This type of chemical reaction is very efficient; however, it only transfers a single electron, so the cathode capacity is limited. Some compounds like FeF3 are capable of transferring multiple electrons through a more complex reaction mechanism, called a conversion reaction.”

Despite FeF3’s potential to increase cathode capacity, the compound has not historically worked well in lithium-ion batteries due to three complications with its conversion reaction: poor energy efficiency (hysteresis), a slow reaction rate, and side reactions that can cause poor cycling life. To overcome these challenges, the scientists added cobalt and oxygen atoms to FeF3 nanorods through a process called chemical substitution. This allowed the scientists to manipulate the reaction pathway and make it more “reversible.”

“When lithium ions are inserted into FeF3, the material is converted to iron and lithium fluoride,” said Sooyeon Hwang, a co-author of the paper and a scientist at Brookhaven’s Center for Functional Nanomaterials (CFN). “However, the reaction is not fully reversible. After substituting with cobalt and oxygen, the main framework of the cathode material is better maintained and the reaction becomes more reversible.”

To investigate the reaction pathway, the scientists conducted multiple experiments at CFN and the National Synchrotron Light Source II (NSLS-II) — two DOE Office of Science User Facilities at Brookhaven.

First, at CFN, the researchers used a powerful beam of electrons to look at the FeF3 nanorods at a resolution of 0.1 nanometers — a technique called transmission electron microscopy (TEM). The TEM experiment enabled the researchers to determine the exact size of the nanoparticles in the cathode structure and analyze how the structure changed between different phases of the charge-discharge process. They saw a faster reaction speed for the substituted nanorods.

“TEM is a powerful tool for characterizing materials at very small length scales, and it is also able to investigate the reaction process in real time,” said Dong Su, a scientist at CFN and a co-corresponding author of the study. “However, we can only see a very limited area of the sample using TEM. We needed to rely on the synchrotron techniques at NSLS-II to understand how the whole battery functions.”

At NSLS-II’s X-ray Powder Diffraction (XPD) beamline, scientists directed ultra-bright x-rays through the cathode material. By analyzing how the light scattered, the scientists could “see” additional information about the material’s structure.

“At XPD, we conducted pair distribution function (PDF) measurements, which are capable of detecting local iron orderings over a large volume,” said Jianming Bai, a co-author of the paper and a scientist at NSLS-II. “The PDF analysis on the discharged cathodes clearly revealed that the chemical substitution promotes electrochemical reversibility.”
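
For readers curious what a PDF measurement actually produces: the reduced pair distribution function G(r) is obtained from the measured structure function S(Q) by a sine Fourier transform, G(r) = (2/π) ∫ Q[S(Q) − 1] sin(Qr) dQ. The short numerical sketch below evaluates that transform on a synthetic S(Q); the data are invented purely for illustration and have nothing to do with the paper’s measurements.

```python
import numpy as np

# Reduced pair distribution function:
#   G(r) = (2/pi) * integral_0^Qmax  Q * [S(Q) - 1] * sin(Q*r) dQ
# The S(Q) below is synthetic, purely for illustration.

Q = np.linspace(0.1, 25.0, 2000)                            # scattering vector (1/angstrom)
S = 1 + 0.5 * np.exp(-((Q - 3.0) ** 2)) * np.cos(2.0 * Q)   # made-up structure function

def reduced_pdf(r, Q, S):
    """Evaluate G(r) numerically with the trapezoid rule."""
    integrand = Q * (S - 1.0) * np.sin(Q * r)
    return (2.0 / np.pi) * np.trapz(integrand, Q)

r_values = np.linspace(0.5, 10.0, 50)                       # real-space distances (angstrom)
G = np.array([reduced_pdf(r, Q, S) for r in r_values])
print(G[:5])       # peaks in G(r) mark preferred atom-atom distances
```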

Combining highly advanced imaging and microscopy techniques at CFN and NSLS-II was a critical step for assessing the functionality of the cathode material.

“We also performed advanced computational approaches based on density functional theory to decipher the reaction mechanism at an atomic scale,” said Xiao Ji, a scientist at UMD and co-author of the paper. “This approach revealed that chemical substitution shifted the reaction to a highly reversible state by reducing the particle size of iron and stabilizing the rocksalt phase.”

Scientists at UMD say this research strategy could be applied to other high energy conversion materials, and future studies may use the approach to improve other battery systems.


Story Source:

Materials provided by DOE/Brookhaven National Laboratory. Note: Content may be edited for style and length.


Journal Reference:

Xiulin Fan, Enyuan Hu, Xiao Ji, Yizhou Zhu, Fudong Han, Sooyeon Hwang, Jue Liu, Seongmin Bak, Zhaohui Ma, Tao Gao, Sz-Chian Liou, Jianming Bai, Xiao-Qing Yang, Yifei Mo, Kang Xu, Dong Su, Chunsheng Wang. High energy-density and reversibility of iron fluoride cathode enabled via an intercalation-extrusion reaction. Nature Communications, 2018; 9 (1) DOI: 10.1038/s41467-018-04476-2


eSight – Glasses for the Blind

Posted on

 

These glasses help legally blind people see! (Video credit: esighteyewear.com, posted by Diply Tech, March 17, 2018)

 

What is eSight?

eSight is an amazing technological breakthrough – electronic glasses that let the legally blind actually see.

It is the only clinically validated device in existence that enables those living with vision loss to see, be mobile, and engage in virtually any Activity of Daily Living.

This device is worn like a normal pair of glasses, and, remarkably, restores sight for someone who is visually impaired.

Most importantly, eSight requires no surgery. Almost instantly after putting them on, an individual with legal blindness or low vision can see in virtually the same manner as someone who is fully sighted. eSight is registered with the FDA and EUDAMED, and is inspected by Health Canada. It is also the only clinically validated wearable technology of its kind.

Neuroscientist Sheila Nirenberg received a MacArthur Genius Award for figuring out, for the first time ever, how our retinas take images from the outside world and turn them into a neural “code” that the brain can understand. It started as a pure research project, but now she’s building the code into a device that could bring sight to the blind.

 

How does this revolutionary technology actually work?

Once the electronic glasses are on, eSight allows the wearer to see, almost instantly and with remarkable clarity.

In the simplest sense, eSight works in three steps. The high-speed, high-resolution camera in the center of the device captures what a user is looking at in real time. This video feed is sent to a powerful computer in the housing of the glasses and is enhanced using proprietary algorithms. The feed is then projected in colour onto the two near-to-eye OLED screens with unprecedented clarity and virtually no latency.
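
eSight’s enhancement algorithms are proprietary, so the details aren’t public, but the capture-enhance-display loop described above can be sketched with off-the-shelf tools. The snippet below is a hypothetical stand-in using OpenCV, with a simple crop-and-zoom plus contrast/brightness boost in place of eSight’s actual processing.

```python
import cv2

# Hypothetical capture -> enhance -> display loop; a stand-in for the kind of
# pipeline described above, not eSight's proprietary software.
cap = cv2.VideoCapture(0)        # any webcam plays the role of the headset camera

zoom = 2.0                       # magnification factor (eSight supports up to 24x)
alpha, beta = 1.5, 20            # crude contrast / brightness enhancement

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Crop the centre of the frame and scale it back up to simulate magnification.
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    enhanced = cv2.convertScaleAbs(cv2.resize(crop, (w, h)), alpha=alpha, beta=beta)
    cv2.imshow("near-to-eye display (simulated)", enhanced)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```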

eSighters can then optimize what they are looking at by using the remote to adjust the color, contrast, focus, brightness, and magnification (up to 24x). Not only does eSight let wearers actually see, but it also allows them to be truly mobile thanks to the patented Bioptic Tilt Capability: eSighters can tilt the eyewear to the position that gives them the best view of the video feed while maximizing their natural peripheral vision. This, along with short latency, ensures that the eSighter’s balance is not disturbed and no nausea occurs – a common problem with immersive technologies such as virtual reality headsets.

Another fun feature of eSight is that it allows individuals to take pictures and to stream video and games by plugging into a laptop, TV, or tablet with an HDMI cable, or by connecting over Bluetooth or WiFi. That way, whether it is streaming a favourite series at home, taking pictures of notes on the classroom board, or whipping through emails at the office, our eSighters can always be connected.

Who does eSight work for?

eSight works for the overwhelming majority of individuals with vision loss.

Today, our eSighters live with a variety of conditions, including (but not limited to) Aniridia, Cataracts, Coloboma, Cone-Rod Dystrophy, Diabetic Retinopathy, Glaucoma, Ocular Albinism, Macular Degeneration, Retinopathy of Prematurity (ROP), Stargardt’s Disease, Optic Neuritis, and Retinal Detachment.

According to the World Health Organization, approximately 253 million people in the world live with blindness or low vision. Less than 15% of this population are profoundly or totally blind; unfortunately, eSight cannot currently help these individuals. However, eSight can work for most of the remaining 85% of this population.

Our eSighters come from all walks of life. Our youngest user is four years old, and our oldest user is 101 years old. We are also proud to say that our eSighters are spread across more than 42 countries around the world, and counting.

Although all of our eSighters are different, they are united by their fundamental right to see; and eSight is dedicated to making that possible.

What can I do with eSight?

The short answer: virtually anything.

Not only does eSight enable people with vision loss to actually see, but it also restores their independence, confidence, self-esteem and freedom. With eSight, individuals can do almost anything they have only ever dreamt of with their newly restored sight.

With eSight, individuals can participate in virtually all Activities of Daily Living (ADLs). Here are a couple of things that some of our eSighters have been up to:

  • Seeing the faces of loved ones, in some cases for the first time
  • Excelling in school and university by being able to see the board from anywhere in the classroom
  • Plugging into their laptop to work directly from their eSight screens
  • Catching up on their favourite TV shows
  • Reading endless amounts of books
  • Traveling alone to some of the places on their bucket list
  • Being able to go back to work to help support their family
  • Watching their favorite sport teams
  • Pursuing their love of painting, drawing and sketching
  • Cooking meals for themselves and their loved ones
  • Going for a walk by themselves
  • Playing sports with their friends and family
  • Picking up previously abandoned hobbies (cards, woodworking, etc.)
  • Living their life to the fullest

 

 

The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning

Posted on

Simple explanations of Artificial Intelligence, Machine Learning, and Deep Learning and how they’re all different. Plus, how AI and IoT are inextricably connected.

We’re all familiar with the term “Artificial Intelligence.” After all, it’s been a popular focus in movies such as The Terminator, The Matrix, and Ex Machina (a personal favorite of mine). But you may have recently been hearing about other terms like “Machine Learning” and “Deep Learning,” sometimes used interchangeably with artificial intelligence. As a result, the difference between artificial intelligence, machine learning, and deep learning can be very unclear.

I’ll begin by giving a quick explanation of what Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) actually mean and how they’re different. Then, I’ll share how AI and the Internet of Things are inextricably intertwined, with several technological advances all converging at once to set the foundation for an AI and IoT explosion.

So what’s the difference between AI, ML, and DL?

Artificial Intelligence

The term was first coined in 1956 by John McCarthy; AI involves machines that can perform tasks characteristic of human intelligence. While this is rather general, it includes things like planning, understanding language, recognizing objects and sounds, learning, and problem solving.

We can put AI in two categories, general and narrow. General AI would have all of the characteristics of human intelligence, including the capacities mentioned above. Narrow AI exhibits some facet(s) of human intelligence, and can do that facet extremely well, but is lacking in other areas. A machine that’s great at recognizing images, but nothing else, would be an example of narrow AI.

Machine learning

At its core, machine learning is simply a way of achieving AI.

Arthur Samuel coined the phrase not long after AI, in 1959, defining it as “the ability to learn without being explicitly programmed.” You see, you can get AI without using machine learning, but that would require building millions of lines of code with complex rules and decision trees.

So instead of hard-coding software routines with specific instructions to accomplish a particular task, machine learning is a way of “training” an algorithm so that it can learn how. “Training” involves feeding huge amounts of data to the algorithm and allowing the algorithm to adjust itself and improve.

To give an example, machine learning has been used to make drastic improvements to computer vision (the ability of a machine to recognize an object in an image or video). You gather hundreds of thousands or even millions of pictures and then have humans tag them. For example, the humans might tag pictures that have a cat in them versus those that do not. Then, the algorithm tries to build a model that can accurately tag a picture as containing a cat or not as well as a human. Once the accuracy level is high enough, the machine has now “learned” what a cat looks like.
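
That train-then-evaluate loop can be sketched in a few lines. The example below uses scikit-learn’s bundled handwritten-digit images as a stand-in for the tagged cat photos described above; the workflow is the same: fit a model on labeled examples, then measure how well it tags examples it has never seen.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small labeled images (digits 0-9) stand in for the tagged cat photos above.
X, y = load_digits(return_X_y=True)

# Hold some examples back so accuracy is measured on data the model never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)   # a simple classifier, not deep learning
model.fit(X_train, y_train)                 # "training": the model adjusts itself to the data

print("accuracy on unseen images:", model.score(X_test, y_test))
```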

Deep learning

Deep learning is one of many approaches to machine learning. Other approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.

Deep learning was inspired by the structure and function of the brain, namely the interconnecting of many neurons. Artificial Neural Networks (ANNs) are algorithms that mimic the biological structure of the brain.

In ANNs, there are “neurons” arranged in discrete layers, with connections to other “neurons”. Each layer picks out a specific feature to learn, such as curves/edges in image recognition. It’s this layering that gives deep learning its name: depth is created by using multiple layers as opposed to a single layer.
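
To make the layering concrete, here is a minimal forward pass through a two-layer network in plain NumPy. The weights are random, so it recognizes nothing yet; the point is only to show how each layer transforms the output of the previous one, which is where the “depth” in deep learning comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer of 'neurons': a linear map followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ weights + bias)

# A made-up 784-value input, e.g. a flattened 28x28 grayscale image.
x = rng.random(784)

# Layer 1: 784 inputs -> 128 hidden features (edges/curves, once trained).
W1, b1 = rng.normal(scale=0.05, size=(784, 128)), np.zeros(128)
# Layer 2: 128 features -> 10 output scores, one per class.
W2, b2 = rng.normal(scale=0.05, size=(128, 10)), np.zeros(10)

hidden = layer(x, W1, b1)     # the first layer's representation of the input
scores = hidden @ W2 + b2     # the second layer turns those features into class scores

print("predicted class:", int(np.argmax(scores)))
```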

AI and IoT are Inextricably Intertwined

I think of the relationship between AI and IoT much like the relationship between the human brain and body.

Our bodies collect sensory input such as sight, sound, and touch. Our brains take that data and make sense of it, turning light into recognizable objects and turning sounds into understandable speech. Our brains then make decisions, sending signals back out to the body to command movements like picking up an object or speaking.

All of the connected sensors that make up the Internet of Things are like our bodies: they provide the raw data of what’s going on in the world. Artificial intelligence is like our brain, making sense of that data and deciding what actions to perform. And the connected devices of IoT are again like our bodies, carrying out physical actions or communicating to others.

Unleashing Each Other’s Potential

The value and the promises of both AI and IoT are being realized because of the other.

Machine learning and deep learning have led to huge leaps for AI in recent years. As mentioned above, machine learning and deep learning require massive amounts of data to work, and this data is being collected by the billions of sensors that are continuing to come online in the Internet of Things. IoT makes better AI.

Improving AI will also drive adoption of the Internet of Things, creating a virtuous cycle in which both areas will accelerate drastically. That’s because AI makes IoT useful.

On the industrial side, AI can be applied to predict when machines will need maintenance or analyze manufacturing processes to make big efficiency gains, saving millions of dollars.
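
As a toy illustration of that industrial case, the sketch below flags a (simulated) machine for maintenance when its vibration readings drift well outside their normal range. Real deployments use far richer models and real sensor feeds, but the shape of the problem is the same: IoT data in, a maintenance decision out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated hourly vibration readings: mostly normal, then a slowly developing fault.
normal = rng.normal(loc=1.0, scale=0.05, size=480)
fault = rng.normal(loc=1.0, scale=0.05, size=20) + np.linspace(0.0, 0.6, 20)
readings = np.concatenate([normal, fault])

# Learn what "normal" looks like from an early window of healthy data.
baseline_mean = normal[:200].mean()
baseline_std = normal[:200].std()

for hour, value in enumerate(readings):
    z = (value - baseline_mean) / baseline_std   # how unusual is this reading?
    if z > 6:                                    # simple threshold rule
        print(f"hour {hour}: reading {value:.2f} is anomalous -> schedule maintenance")
        break
```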

On the consumer side, rather than having to adapt to technology, technology can adapt to us. Instead of clicking, typing, and searching, we can simply ask a machine for what we need. We might ask for information like the weather or for an action like preparing the house for bedtime (turning down the thermostat, locking the doors, turning off the lights, etc.).

Converging Technological Advancements Have Made this Possible

Shrinking computer chips and improved manufacturing techniques mean cheaper, more powerful sensors.

Quickly improving battery technology means those sensors can last for years without needing to be connected to a power source.

Wireless connectivity, driven by the advent of smartphones, means that data can be sent in high volume at cheap rates, allowing all those sensors to send data to the cloud.

And the birth of the cloud has allowed for virtually unlimited storage of that data and virtually infinite computational ability to process it.

Of course, there are one or two concerns about the impact of AI on our society and our future. But as advancements and adoption of both AI and IoT continue to accelerate, one thing is certain: the impact is going to be profound.

Blockchain

Posted on

Blockchain technology is commonly associated with Bitcoin and other cryptocurrencies, but that’s really only the tip of the iceberg. Some people think blockchain could end up transforming a number of important industries, from health care to politics.

Whether you’re simply looking to invest in Bitcoin, trade some Ethereum, or are just intrigued about what the heck blockchain actually is, you’ve come to the right place.

Blockchain isn’t just for Bitcoin

While blockchain technology isn’t simple when you dig into the nitty-gritty, the basic idea isn’t so opaque. It’s effectively a database that’s validated by a wider community, rather than a central authority. It’s a collection of records that a whole crowd gives the thumbs up, rather than one that relies on a single entity, like a bank or government, which most likely hosts the data on a particular server.

Each “block” represents a number of transactional records, and the “chain” component links them all together with a hash function. As records are created, they are confirmed by a distributed network of computers and paired up with the previous entry in the chain, thereby creating a chain of blocks, or a blockchain.

The entire blockchain is retained on this large network of computers, meaning that no one person has control over its history. That’s an important component, because it certifies everything that has happened in the chain prior, and it means that no one person can go back and change things. It makes the blockchain a public ledger that cannot be easily tampered with, giving it a built-in layer of protection that isn’t possible with a standard, centralized database of information.
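
Here is a minimal sketch of that hash-linking idea using nothing but Python’s standard library. Each block stores the hash of the previous block, so altering any historical record changes every hash after it and the tampering is immediately visible. This toy chain has no network or consensus mechanism; it only illustrates the data structure.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def new_block(records, previous_hash):
    return {
        "timestamp": time.time(),
        "records": records,              # the transactional records in this block
        "previous_hash": previous_hash,  # the "chain" link back to the prior block
    }

# Build a tiny chain of three blocks.
chain = [new_block(["genesis"], previous_hash="0")]
for records in (["alice pays bob 5"], ["bob pays carol 2"]):
    chain.append(new_block(records, previous_hash=block_hash(chain[-1])))

# Verify the chain: every block must point at the true hash of its predecessor.
for prev, current in zip(chain, chain[1:]):
    assert current["previous_hash"] == block_hash(prev), "chain has been tampered with"
print("chain verified:", len(chain), "blocks")
```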

While traditionally we have needed these central authorities to help us trust one another and fulfil contracts, the blockchain makes it possible to have our peers guarantee that for us in an automated, secure fashion.

That’s the innovation of blockchain, and it’s why you may hear it used to reference things other than Bitcoin and other cryptocurrency. Though generally not used for it yet, blockchain could be used to maintain a variety of information. An organization called Follow My Vote is attempting to use it for an electronic voting system that’s more secure than modern versions, and healthcare providers might one day use it to handle patient records.

Where did blockchain come from?

Although blockchain technology has only been effectively employed in the past decade, its roots can be traced back far further. A 1976 paper, “New Directions in Cryptography,” discussed the idea of a mutual distributed ledger, which is what the blockchain effectively acts as. That idea was built upon in the 1990s with a paper entitled “How to Time-Stamp a Digital Document.” It would take another few decades, the arrival of powerful modern computers, and a clever implementation in the form of a cryptocurrency to make these ideas viable.

In order to validate the blocks in the same manner as a traditional private ledger, the blockchain employs complicated calculations. That, in turn, requires powerful computers, which are expensive to own, operate, and keep cool. That’s part of the reason that bitcoin acted as such a great starting point for the introduction of blockchain technology, because it could reward those taking part in the process with something of financial value.

Bitcoin ultimately made its first appearance in 2009, bringing together the classic idea of the mutual distributed ledger, the blockchain, with an entirely digital currency that wasn’t controlled by any one individual or organization. Developed by the still effectively anonymous “Satoshi Nakamoto,” the cryptocurrency allowed for a method of conducting transactions while protecting them from interference by the use of the blockchain.

How do cryptocurrencies use the blockchain?

Although bitcoin and the alternative currencies all utilize blockchain technology, they do so in differing manners. Since bitcoin was first invented it has undergone a few changes at the behest of its core developers and the wider community, and other alt-coins have been created to improve upon bitcoin, operating in slightly different ways.

In the case of bitcoin, a new block in its blockchain is created roughly every ten minutes. That block verifies and records, or “certifies,” new transactions that have taken place. In order for that to happen, “miners” utilize powerful computing hardware to provide a proof-of-work — a calculation that effectively creates a number which verifies the block and the transactions it contains. Several of those confirmations must be received before a bitcoin transaction can be considered effectively complete, even if technically the actual bitcoin is transferred near-instantaneously.
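
The proof-of-work search itself is easy to sketch: keep hashing the block contents with a changing nonce until the hash meets a difficulty target, often expressed as a required number of leading zeros. The toy example below uses a very low difficulty so it finishes instantly; Bitcoin’s real difficulty makes the same search consume enormous amounts of computation, while checking a finished proof stays cheap.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4):
    """Find a nonce so that sha256(block_data + nonce) starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest   # anyone can re-hash once to verify this proof
        nonce += 1

nonce, digest = proof_of_work("block #1: alice pays bob 5")
print(f"nonce={nonce}, hash={digest}")
```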


This is where bitcoin has run into problems in recent months. As the number of bitcoin transactions increases, the relatively hard 10-minute block creation time means that it can take longer to confirm all of the transactions, and backlogs can occur.

With certain alt-coins, that’s a little different. With Litecoin it’s more like two and a half minutes, while with Ethereum the block time is just 10-20 seconds, so confirmations tend to happen far faster. There are obvious benefits of such a change, though by having blocks generate at a faster rate there is a greater chance of errors occurring. If 51 percent of computers working on the blockchain record an error, it becomes near-permanent, and generating faster blocks means fewer systems working on them.

What’s the catch?

Blockchain technology has a lot of exciting potential, but there are some serious considerations that need to be addressed before we can say it’s the technology of the future.

Remember all that computing power required to verify transactions? Those computers need electricity. Bitcoin is the poster child of the problematic escalation in power demanded by a large blockchain network. Although exact statistics on the power requirements of bitcoin are difficult to come by, its consumption is regularly compared to that of small countries. That’s not appealing given today’s concerns about climate change, the availability of power in developing countries, and the reliability of power in developed nations.

Transaction speed is also an issue. As we noted above, blocks in a chain must be verified by the distributed network, and that can take time. A lot of time. At its worst, bitcoin’s average transaction time exceeded 41 hours. Ethereum is much more efficient, but its average time is around 15 seconds — which would be an eternity in a checkout line at your local grocery store. Blockchains used for purposes other than cryptocurrency could run into similar problems. You can imagine how frustrating it would be to wait 15 seconds every time you wanted to change a database entry.

These problems will need to be resolved as blockchain becomes more popular. However, considering we’re less than a decade on from the blockchain’s first implementation, and we’re already on the road to developing new uses for it, we remain optimistic that those involved will work it out.

 

 

Project Soli

Posted on

Project Soli is developing a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.

 

5 INVENTIONS THAT WILL BLOW YOUR MIND

Posted on

 

https://www.youtube.com/watch?v=t0R0Xr0e-uk

Future of Healthcare by Microsoft

Posted on

This is an incredible video on how Microsoft sees the future of healthcare and how technology is improving our way of life! With state-of-the-art hospitals being built in Malaysia, it’s just a matter of time before we experience seamless healthcare delivery. Malaysia Healthcare patients use a portable Personal Health Record (PHR) called the iPHER that carries all their PHI – including medications, lab tests, diagnoses, immunizations, alternative procedures, digital images, dental records, ophthalmic care (lens and contact prescriptions), and DNA – anywhere in the world, with no need to access the Internet to view the information. Malaysia Healthcare currently uses this PHR to reduce medical errors, create continuity of care for all their patients, and provide seamless healthcare delivery.

IBM Healthcare Industry: 2020 Vision

Posted on

As the global healthcare industry begins to redefine value and success for a more sustainable and value-based healthcare system, this video articulates IBM’s vision for Smarter Healthcare, engaging the audience in a view of their future with IBM as their partner.