Android App of the Blog

Hey people,

Here is the Android application of InnovatewithKK. Download and install this tiny 253 KB app on your Android smartphone and stay connected to the world of innovation 24×7.

Here are some screenshots:


Get it here for free:

http://www.appsgeyser.com/getwidget/innovatewithKK/

or

scan the QR code to download


Thank you all for your love and support. Keep innovating.

A New Laser for a Faster Internet

A new laser developed by a research group at Caltech holds the potential to increase by orders of magnitude the rate of data transmission in the optical-fiber network—the backbone of the Internet.


The study was published the week of February 10–14 in the online edition of the Proceedings of the National Academy of Sciences. The work is the result of a five-year effort by researchers in the laboratory of Amnon Yariv, Martin and Eileen Summerfield Professor of Applied Physics and professor of electrical engineering; the project was led by postdoctoral scholar Christos Santis (PhD ’13) and graduate student Scott Steger.

Light is capable of carrying vast amounts of information—approximately 10,000 times more bandwidth than microwaves, the earlier carrier of long-distance communications. But to utilize this potential, the laser light needs to be as spectrally pure—as close to a single frequency—as possible. The purer the tone, the more information it can carry, and for decades researchers have been trying to develop a laser that comes as close as possible to emitting just one frequency.

Today’s worldwide optical-fiber network is still powered by a laser known as the distributed-feedback semiconductor (S-DFB) laser, developed in the mid-1970s in Yariv’s research group. The S-DFB laser’s unusual longevity in optical communications stemmed from what was, at the time, unparalleled spectral purity—the degree to which the light emitted matched a single frequency. The laser’s increased spectral purity directly translated into a larger information bandwidth of the laser beam and longer possible transmission distances in the optical fiber—with the result that more information could be carried farther and faster than ever before.


At the time, this unprecedented spectral purity was a direct consequence of the incorporation of a nanoscale corrugation within the multilayered structure of the laser. The washboard-like surface acted as a sort of internal filter, discriminating against spurious “noisy” waves contaminating the ideal wave frequency. Although the old S-DFB laser had a successful 40-year run in optical communications—and was cited as the main reason for Yariv receiving the 2010 National Medal of Science—the spectral purity, or coherence, of the laser no longer satisfies the ever-increasing demand for bandwidth.

“What became the prime motivator for our project was that the present-day laser designs—even our S-DFB laser—have an internal architecture which is unfavorable for high spectral-purity operation. This is because they allow a large and theoretically unavoidable optical noise to comingle with the coherent laser and thus degrade its spectral purity,” he says.

The old S-DFB laser consists of continuous crystalline layers of materials called III-V semiconductors—typically gallium arsenide and indium phosphide—that convert into light the applied electrical current flowing through the structure. Once generated, the light is stored within the same material. Since III-V semiconductors are also strong light absorbers—and this absorption leads to a degradation of spectral purity—the researchers sought a different solution for the new laser.

The high-coherence new laser still converts current to light using the III-V material, but in a fundamental departure from the S-DFB laser, it stores the light in a layer of silicon, which does not absorb light. Spatial patterning of this silicon layer—a variant of the corrugated surface of the S-DFB laser—causes the silicon to act as a light concentrator, pulling the newly generated light away from the light-absorbing III-V material and into the near absorption-free silicon.

This newly achieved high spectral purity—a 20 times narrower range of frequencies than possible with the S-DFB laser—could be especially important for the future of fiber-optic communications. Originally, laser beams in optical fibers carried information in pulses of light; data signals were impressed on the beam by rapidly turning the laser on and off, and the resulting light pulses were carried through the optical fibers. However, to meet the increasing demand for bandwidth, communications-system engineers are now adopting a new method of impressing the data on laser beams that no longer requires this “on-off” technique. This method is called coherent phase communication.

In coherent phase communications, the data resides in small delays in the arrival time of the waves; the delays—a tiny fraction (10⁻¹⁶) of a second in duration—can then accurately relay the information even over thousands of miles. The digital electronic bits carrying video, data, or other information are converted at the laser into these small delays in the otherwise rock-steady light wave. But the number of possible delays, and thus the data-carrying capacity of the channel, is fundamentally limited by the degree of spectral purity of the laser beam. This purity can never be absolute—a limitation of the laws of physics—but with the new laser, Yariv and his team have tried to come as close to absolute purity as is possible.
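The paper doesn’t spell out a particular modulation format, but a simple way to picture phase encoding is quadrature phase-shift keying (QPSK), where each pair of bits becomes one of four phase offsets on an otherwise steady carrier. The Python sketch below is purely illustrative; the carrier frequency, sample counts, and bit-to-phase mapping are arbitrary demo choices, not parameters from the Caltech work.

```python
import numpy as np

# Illustrative only: hide data in small phase offsets of a steady carrier,
# the basic idea behind coherent phase communication. All numbers here are
# arbitrary demo choices, not values from the paper.
CARRIER_HZ = 1e9            # assumed carrier frequency for the demo
SAMPLES_PER_SYMBOL = 64

PHASE_MAP = {(0, 0): 0.25 * np.pi, (0, 1): 0.75 * np.pi,
             (1, 1): 1.25 * np.pi, (1, 0): 1.75 * np.pi}

def bits_to_phases(bits):
    """Group bits in pairs and map each pair to one of four phase offsets."""
    return [PHASE_MAP[pair] for pair in zip(bits[0::2], bits[1::2])]

def modulate(phases):
    """Produce carrier samples whose phase, not amplitude, carries the data."""
    t = np.arange(SAMPLES_PER_SYMBOL) / (SAMPLES_PER_SYMBOL * CARRIER_HZ)
    return np.concatenate(
        [np.cos(2 * np.pi * CARRIER_HZ * t + ph) for ph in phases])

signal = modulate(bits_to_phases([1, 0, 0, 1, 1, 1, 0, 0]))
print(signal.shape)         # 4 symbols x 64 samples = (256,)
```

The cleaner the carrier’s phase (that is, the higher the laser’s spectral purity), the more finely spaced phase values a receiver can still tell apart, which is why laser coherence sets the capacity of the channel.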

These findings were published in a paper titled “High-coherence semiconductor lasers based on integral high-Q resonators in hybrid Si/III-V platforms.” In addition to Yariv, Santis, and Steger, other Caltech coauthors include graduate student Yaakov Vilenchik and former graduate student Arseny Vasilyev (PhD ’13). The work was funded by the Army Research Office, the National Science Foundation, and the Defense Advanced Research Projects Agency. The lasers were fabricated at the Kavli Nanoscience Institute at Caltech.

Battery offers renewable energy breakthrough!!

“Imagine a device the size of a home heating-oil tank sitting in your basement. It would store a day’s worth of sunshine from the solar panels on the roof of your house …” — Michael Marshak


A team of Harvard scientists and engineers has demonstrated a new type of battery that could fundamentally transform the way electricity is stored on the grid, making power from renewable energy sources such as wind and sun far more economical and reliable.

The novel battery technology is reported in a paper published in Nature on Jan. 9. Under the OPEN 2012 program, the Harvard team received funding from the U.S. Department of Energy’s Advanced Research Projects Agency — Energy (ARPA-E) to develop the grid-scale battery, and plans to work with the agency to catalyze further technological and market breakthroughs over the next several years.

The paper describes a metal-free flow battery that relies on the electrochemistry of naturally abundant, inexpensive, small organic (carbon-based) molecules called quinones, which are similar to molecules that store energy in plants and animals.

The mismatch between the availability of intermittent wind or sunshine and the variable demand is the biggest obstacle to using renewable sources for a large fraction of our electricity. A cost-effective means of storing large amounts of electrical energy could solve this problem.

The battery was designed, built, and tested in the laboratory of Michael J. Aziz, the Gene and Tracy Sykes Professor of Materials and Energy Technologies at the Harvard School of Engineering and Applied Sciences (SEAS). Roy G. Gordon, the Thomas Dudley Cabot Professor of Chemistry and Professor of Materials Science, led the work on the synthesis and chemical screening of molecules. Alán Aspuru-Guzik, professor of chemistry and chemical biology, used his pioneering high-throughput molecular screening methods to calculate the properties of more than 10,000 quinone molecules in search of the best candidates for the battery.

Flow batteries store energy in chemical fluids contained in external tanks, as with fuel cells, instead of within the battery container itself. The two main components — the electrochemical conversion hardware through which the fluids are flowed (which sets the peak power capacity) and the chemical storage tanks (which set the energy capacity) — may be independently sized. Thus the amount of energy that can be stored is limited only by the size of the tanks. The design permits larger amounts of energy to be stored at lower cost than with traditional batteries.

By contrast, in solid-electrode batteries, such as those commonly found in cars and mobile devices, the power conversion hardware and energy capacity are packaged together in one unit and cannot be decoupled. Consequently they maintain peak discharge power for less than an hour before they are drained, and are therefore ill-suited to store intermittent renewables.

To store 50 hours of energy from a 1-megawatt power capacity wind turbine (50 megawatt-hours), for example, a possible solution would be to buy traditional batteries with 50 megawatt-hours of energy storage, but they would come with 50 megawatts of power capacity. Paying for 50 megawatts of power capacity when only 1 megawatt is necessary makes little economic sense.
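To put rough numbers on that argument, here is a back-of-envelope sketch. The per-unit costs below are placeholder assumptions chosen only to show the effect of decoupling power from energy; they are not figures from the Nature paper or from ARPA-E.

```python
# Back-of-envelope sizing sketch. Every cost figure is an assumed placeholder
# for illustration only, not data from the paper.
POWER_NEEDED_MW = 1        # wind turbine peak output
ENERGY_NEEDED_MWH = 50     # 50 hours of that output

# Flow battery: conversion stack (power) and electrolyte tanks (energy)
# are sized and paid for independently.
STACK_USD_PER_MW = 500_000     # assumed
TANK_USD_PER_MWH = 50_000      # assumed
flow_cost = (POWER_NEEDED_MW * STACK_USD_PER_MW
             + ENERGY_NEEDED_MWH * TANK_USD_PER_MWH)

# Solid-electrode battery: power and energy come bundled in each cell
# (assume ~1 MW of discharge capability per MWh of cells), so buying
# 50 MWh also buys ~50 MW of power capacity that is never used.
CELL_USD_PER_MWH = 300_000     # assumed
bundled_power_mw = ENERGY_NEEDED_MWH * 1.0
solid_cost = ENERGY_NEEDED_MWH * CELL_USD_PER_MWH

print(f"Flow battery:  {flow_cost:,} USD for {POWER_NEEDED_MW} MW / {ENERGY_NEEDED_MWH} MWh")
print(f"Solid battery: {solid_cost:,} USD, bundled with ~{bundled_power_mw:.0f} MW of unused power")
```

Whatever the exact prices, the structural point stands: with a flow battery you pay only for the power electronics you actually need, then scale the energy by adding relatively inexpensive tanks of electrolyte.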

“The Harvard team’s results published in Nature demonstrate an early, yet important technical achievement that could be critical in furthering the development of grid-scale batteries,” said ARPA-E Program Director John Lemmon. “The project team’s result is an excellent example of how a small amount of catalytic funding from ARPA-E can help build the foundation to hopefully turn scientific discoveries into low-cost, early-stage energy technologies.”

Intel® Edison “The SD card-sized computer”

The Intel® Edison development board is a tiny, ultra-power-efficient development platform the size of an SD card that is small enough to drop into just about anything.


It can be designed to work with almost any device—not just computers, phones, or tablets, but chairs, coffeemakers, and even coffee cups. The possibilities are endless for entrepreneurs and inventors of all kinds.

The Intel Edison board features a low-power 22nm 400MHz Intel® Quark processor with two cores, integrated Wi-Fi and Bluetooth*, and much more.

The Edison chip supports Linux and includes a dual-core CPU, Wi-Fi, Bluetooth LE (low energy), and an integrated app store. It will be available in “the middle of 2014,” according to Intel CEO Brian Krzanich.

The unique combination of small size, power, and rich capabilities makes the Intel Edison board a game changer, lowering the barriers to entry for thousands of visionaries.

Intel Edison board-powered devices can cooperate in highly customized and sophisticated ways. These devices don’t have to be hardwired one-trick ponies; they can house multiple apps that can be downloaded and installed just like we do with phones and tablets.

Infopill #5: “Graphene: The unexpected science in a pencil line”

 

Graphene is a two-dimensional material consisting of a single layer of carbon atoms arranged in a honeycomb, or chicken-wire, structure. It is the thinnest material known and yet is also one of the strongest. It conducts electricity as efficiently as copper and outperforms all other materials as a conductor of heat. Graphene is almost completely transparent, yet so dense that even helium, the smallest atom, cannot pass through it.


If we stack layers of graphene on top of one another they form graphite, which is found in every pencil lead. In fact, anyone who has drawn a line with a pencil has probably made some graphene. It was first studied as a limiting case for theoretical work on graphite by Philip Wallace as long ago as 1947. The fact that electric current would be carried by effectively massless charge carriers in graphene was pointed out theoretically by Gordon Walter Semenoff, David P. DiVincenzo, and Eugene J. Mele in 1984, and the name “graphene” was first mentioned in 1987 by S. Mouras and co-workers to describe the graphite layers that had various compounds inserted between them, forming the so-called graphite intercalation compounds, or GICs.

The term has also been used extensively in work on carbon nanotubes, which are effectively rolled-up graphene sheets. Attempts to grow graphene on other single-crystal surfaces have been ongoing since the 1970s, but strong interactions with the surface on which it was grown always prevented the true properties of graphene from being measured experimentally.

Potential Applications:

LCD ‘Smart Windows’

Composite materials

Magnetism and graphene

Graphene for terahertz electronics

Graphene plasmonics

Electrochemical applications of graphene

Graphene sensors

Integrated circuits and nano-electronics

Saturable Absorber for Ultrafast Pulsed Lasers

Single Molecule Sensors

Solar Cells

 

Infopill #4: “God Particles”

The Higgs boson, or the “God Particle,” is quite sensational and one of the most sought-after discoveries in physics, rooted in theoretical work by Peter Higgs and François Englert.

To understand what exactly the God Particle is, we have to look back into the past:

In the 1970s, physicists realized that there are very close ties between two of the four fundamental forces – the weak force and the electromagnetic force. The two forces can be described within the same theory, which forms the basis of the Standard Model. This “unification” implies that electricity, magnetism, light and some types of radioactivity are all manifestations of a single underlying force known as the electroweak force.

The basic equations of the unified theory correctly describe the electroweak force and its associated force-carrying particles, namely the photon, and the W and Z bosons, except for a major glitch. All of these particles emerge without a mass. While this is true for the photon, we know that the W and Z have mass, nearly 100 times that of a proton. Fortunately, theorists Robert Brout, François Englert and Peter Higgs made a proposal that was to solve this problem. What we now call the Brout-Englert-Higgs mechanism gives a mass to the W and Z when they interact with an invisible field, now called the “Higgs field”, which pervades the universe.

Just after the big bang, the Higgs field was zero, but as the universe cooled and the temperature fell below a critical value, the field grew spontaneously so that any particle interacting with it acquired a mass. The more a particle interacts with this field, the heavier it is. Particles like the photon that do not interact with it are left with no mass at all. Like all fundamental fields, the Higgs field has an associated particle – the Higgs boson. The Higgs boson is the visible manifestation of the Higgs field, rather like a wave at the surface of the sea.

An elusive particle

A problem for many years has been that no experiment has observed the Higgs boson to confirm the theory. On 4 July 2012, the ATLAS and CMS experiments at CERN’s Large Hadron Collider announced they had each observed a new particle in the mass region around 126 GeV. This particle is consistent with the Higgs boson but it will take further work to determine whether or not it is the Higgs boson predicted by the Standard Model. The Higgs boson, as proposed within the Standard Model, is the simplest manifestation of the Brout-Englert-Higgs mechanism. Other types of Higgs bosons are predicted by other theories that go beyond the Standard Model.

On 8 October 2013 the Nobel prize in physics was awarded jointly to François Englert and Peter Higgs “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider.”

inFORM: Latest innovation from MIT labs

inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. inFORM can also interact with the physical world around it, for example moving objects on the table’s surface. Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance.

People at MIT are currently exploring a number of application domains for the inFORM shape display. One area they are working on is geospatial data, such as maps, GIS, terrain models, and architectural models. Urban planners and architects can view 3D designs physically and better understand, share, and discuss their designs; the group is collaborating with the urban planners in the Changing Places group at MIT on this (http://cp.media.mit.edu/).

In addition, inFORM would allow 3D modelers and designers to prototype their 3D designs physically without 3D printing (at a low resolution). Finally, cross sections through volumetric data, such as medical imaging CT scans, can be viewed in 3D physically and interacted with. The team would like to explore medical or surgical simulations, and they are also intrigued by the possibilities of remotely manipulating objects on the table.

Coin: The magic of innovation

Coin is a connected device that can hold and behave like the cards you already carry. Coin works with your debit cards, credit cards, gift cards, loyalty cards and membership cards. Instead of carrying several cards you carry one Coin. Multiple accounts and information all in one place.

You must watch the video to understand the concept!

Infopill #3: Holographic Storage

Holographic storage is the next big step in data storage. We’re looking at Star Wars-level futuristic here: holographic memory could potentially store terabytes of data in a 1-centimeter cube. This technology stores data in three dimensions rather than the two used in CDs and DVDs, so data can be recorded at various depths of the material, giving extremely high storage capacity in a very small footprint.

Holography breaks through the density limits of conventional storage by going beyond recording only on the surface, to recording through the full depth of the medium. Unlike other technologies that record one data bit at a time, holography records and reads over a million bits of data with a single flash of light. This enables transfer rates significantly higher than current optical storage devices. The combination of high storage density, fast transfer rates, and durable, reliable, low-cost media makes holography poised to become a compelling choice for next-generation storage and content distribution needs.

In addition, the flexibility of the technology allows for the development of a wide variety of holographic storage products, ranging from handheld devices for consumers to storage products for the enterprise.

Imagine having 50 hours of high-definition video on a single disk, 50,000 songs on a postage stamp, or 500,000 X-rays on a credit card. Holographic storage makes it all possible.

Recording data


Light from a single laser beam is split into two beams: the signal beam (which carries the data) and the reference beam. The hologram is formed where these two beams intersect in the recording medium. The data are encoded onto the signal beam by a device called a spatial light modulator (SLM). The SLM translates the electronic data of 0s and 1s into an optical “checkerboard” pattern of light and dark pixels. The data are arranged in an array, or page, of over one million bits; the exact number of bits is determined by the pixel count of the SLM. At the point where the reference beam and the data-carrying signal beam intersect, the hologram is recorded in the light-sensitive storage medium. A chemical reaction occurs, causing the hologram to be stored. By varying the reference beam angle or the media position, hundreds of unique holograms can be recorded in the same volume of material.
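As a toy picture of the electronic side of that step (not the actual drive hardware or firmware), the sketch below packs a bit stream into a square page of dark and light pixels, the way an SLM would present it to the signal beam. A real page holds over a million pixels; the 8×8 page here is just for illustration.

```python
import numpy as np

# Toy sketch of the SLM step: arrange a bit stream as a square "data page"
# of dark (0) and light (1) pixels. Real pages exceed a million pixels;
# the 8x8 size here is for illustration only.
PAGE_SIDE = 8

def bits_to_page(bits, side=PAGE_SIDE):
    """Arrange side*side bits as a 2D page of light/dark pixels."""
    assert len(bits) == side * side, "one page holds exactly side*side bits"
    return np.array(bits, dtype=np.uint8).reshape(side, side)

rng = np.random.default_rng(0)
page = bits_to_page(rng.integers(0, 2, PAGE_SIDE * PAGE_SIDE))
print(page)
```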

Reading data


To read the data, the reference beam is deflected off the hologram, thus reconstructing the stored information. The hologram is then projected onto a detector that reads the entire data page of over one million bits at once. This parallel readout of data is what gives holography its fast transfer rates.
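For a rough sense of why page-at-a-time readout is fast, here is a one-line calculation; the page rate is an assumed figure for illustration, not a product specification.

```python
# Illustrative arithmetic: reading a whole data page with each flash of light.
BITS_PER_PAGE = 1_000_000        # "over one million bits" per page
PAGE_READS_PER_SECOND = 1_000    # assumed page rate, for illustration only

throughput = BITS_PER_PAGE * PAGE_READS_PER_SECOND   # bits per second
print(f"{throughput / 1e9:.1f} Gbit/s")              # 1.0 Gbit/s under these assumptions
```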

Infopill #2: Game Theory


Remember the scientist in the movie “A Beautiful Mind”? It’s based on the famous Nobel laureate Dr. John Nash and his work in game theory.

Game theory is the study of strategic decision making. More formally, it is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.” An alternative term suggested “as a more descriptive name for the discipline” is interactive decision theory. Game theory is mainly used in economics, political science, and psychology, as well as in logic and biology. The subject first addressed zero-sum games, in which one person’s gains exactly equal the net losses of the other participant(s). Today, however, game theory applies to a wide range of behavioral relations and has developed into an umbrella term for the logical side of decision science, including both humans and non-humans, such as computers.

Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann’s original proof used Brouwer’s fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
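As a concrete toy example of a mixed-strategy equilibrium in a two-person zero-sum game, the sketch below brute-forces the row player’s mixing probability in matching pennies and recovers the familiar minimax answer (a 50/50 mix with game value 0). It is a didactic illustration of the idea von Neumann proved in general, not a reproduction of his proof.

```python
import numpy as np

# Matching pennies: the row player wins 1 when the coins match, loses 1
# otherwise. The row player mixes Heads with probability p; the column
# player then picks whichever pure reply hurts the row player most, and
# the row player chooses p to make that worst case as good as possible.
PAYOFF = np.array([[1.0, -1.0],     # row payoff vs column playing Heads/Tails
                   [-1.0, 1.0]])

best_p, best_value = None, -np.inf
for p in np.linspace(0.0, 1.0, 1001):
    mix = np.array([p, 1.0 - p])
    worst_case = min(mix @ PAYOFF)          # column's best counter-move
    if worst_case > best_value:
        best_p, best_value = p, worst_case

print(f"optimal mix ~ {best_p:.2f}/{1 - best_p:.2f}, game value ~ {best_value:.2f}")
# -> optimal mix ~ 0.50/0.50, game value ~ 0.00
```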