Linux Evolution Reveals Origins of Curious Mathematical Phenomenon

(PhysOrg.com) — Zipf's law is a testament to the order in our world, showing that the same patterns emerge in a wide variety of situations. The linguist George Kingsley Zipf first proposed the law in 1949, when he noticed that the distribution of words in a newspaper, book, or other literary article always followed the same pattern.

Zipf counted how many times each word appeared, and found that the probability of the occurrence of words starts high and tapers off. Specifically, the most frequent word occurs about twice as often as the second most frequent word, which in turn occurs about twice as often as the fourth most frequent word, and so on. Mathematically, this means that the frequency of any word is inversely proportional to its rank. When the Zipf curve is plotted on a log-log scale, it appears as a straight line with a slope of -1.

Since Zipf's discovery, researchers have found that the power law describes many other natural and human phenomena, including the distribution of cities ranked by their population, the distribution of corporate wealth, and Internet traffic characteristics.

When analyzing systems that follow Zipf's law, researchers usually assume certain mechanisms to be responsible for this patterned behavior. However, no one had ever empirically demonstrated that these assumed mechanisms are indeed the origin of Zipf's law. Now, a team of researchers from ETH Zürich (the Swiss Federal Institute of Technology Zürich) in Switzerland has confirmed that these assumed mechanisms – such as scale-free, proportional growth rates – are at the origin of Zipf's law. The researchers used four orders of magnitude of data detailing the evolution of open source software applications created for a Linux operating system to confirm the assumption.

The team studied Debian Linux, a free operating system continuously being developed by more than 1,000 volunteers from around the world. Developers create software packages, such as text editors or music players, that are added to the system. Beginning with 474 packages in 1996, Debian Linux has expanded to include more than 18,000 packages today. The packages form an intricate network, with some packages having greater connectivity than others, as defined by how many other packages depend on a given package.

"Open source offers a unique opportunity provided by the high completeness of data concerning open source (thanks to the disclosure policy of the open source terms of license)," lead author Thomas Maillart of ETH Zürich told PhysOrg.com. "Debian Linux allowed us to retrieve exhaustive information from several years ago. Many other complex systems are not so well 'documented.'"

[Figure: When the Zipf curve is plotted on a log-log scale, it appears as a straight line with a slope of -1. The graph shows that four Debian Linux releases each follow Zipf's law: Woody (orange), Sarge (green), Etch (blue) and Lenny (black). Credit: T. Maillart, et al.]

As the researchers explain, the Linux network is constantly changing: new packages enter, some disappear, and others gain or lose connectivity. Yet throughout the 12 years, the distribution of packages, as ranked by their number of incoming links from other packages, has followed Zipf's law, with a few very popular packages having much greater connectivity than most.

While many previous models of Zipf's law start with the assumption that the set of entities (e.g. packages) appeared at the same time, the Swiss researchers track the time evolution of package connectivity in the Linux network since 1996. This perspective enabled them to test for the specific characteristics of the growth of the Linux network that lead to the emergence of Zipf's law. Using the data, they showed that the growth rates of package connectivities are proportional to the connectivities themselves. In addition, they showed empirically that the average growth of the total number of links to a given package over a time interval is proportional to that time interval. Further, the variability of the total number of links to a given package increases proportionally to the square root of time, providing a crucial test of the mechanism of stochastic proportional growth of connectivity between packages. Altogether, these characteristics are responsible for the universal distribution pattern of Zipf's law.

"We show that the distribution of connectivity of new entrants is also a power law with an exponent much bigger than 1, confirming that the proportional growth mechanism is solely responsible for the Zipf's law," Maillart said.

He explained that, while the Linux data allowed the researchers to confirm the origins of Zipf's law, their results raise more questions.

"Linux Debian gave us the opportunity to verify the 'proportional mechanism,' thanks to an important dataset and a huge investigation potential," Maillart said. "All changes (evolution) in open source software are freely available and therefore can be tracked in detail. However, model verification has brought one answer and many resulting questions we intend to give an answer to. We think particularly of mechanisms of success/failure of projects in relation with their management."

"Remember that we still do not clearly understand the reasons of the success of the open source, since it's free and based on altruist contributions by programmers," he said. "Additionally, one can bet that further research in this direction (open source and proportional growth) may raise useful questions for other systems (cities, economy, etc.) that would bring new insights to explain their evolution."

More information: T. Maillart, D. Sornette, S. Spaeth, and G. von Krogh. "Empirical Tests of Zipf's Law Mechanism in Open Source Linux Distribution." Physical Review Letters 101, 218701 (2008).
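The proportional-growth mechanism the researchers tested is easy to illustrate in simulation. The sketch below is not the authors' code, and the entry rate and other parameter values are arbitrary choices: it grows a network in which each new dependency link attaches to an existing package with probability proportional to that package's current number of incoming links, with new packages occasionally entering. Ranking packages by connectivity then reproduces the approximately straight line of Zipf's law on a log-log plot.

```python
import math
import random
from collections import Counter

def simulate(n_steps=200_000, new_package_rate=0.1, seed=1):
    """Stochastic proportional growth: each new dependency link points to
    an existing package with probability proportional to its current
    number of incoming links (preferential attachment); with a small
    probability, a brand-new package receives the link instead."""
    rng = random.Random(seed)
    endpoints = [0, 1]                 # one entry per existing link, so a
    indegree = Counter({0: 1, 1: 1})   # uniform pick = proportional pick
    n_packages = 2
    for _ in range(n_steps):
        if rng.random() < new_package_rate:
            node = n_packages          # a new package enters with its first link
            n_packages += 1
        else:
            node = rng.choice(endpoints)
        indegree[node] += 1
        endpoints.append(node)
    return indegree

ranked = sorted(simulate().values(), reverse=True)

# Zipf's law predicts log(connectivity) vs. log(rank) is a straight line
# with slope close to -1; estimate the slope over the top 200 packages.
xs = [math.log(rank) for rank in range(1, 201)]
ys = [math.log(ranked[rank - 1]) for rank in range(1, 201)]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) \
        / (n * sum(x * x for x in xs) - sum(xs) ** 2)
print(f"log-log slope over top 200 packages: {slope:.2f}")
```

With these settings the fitted slope comes out near -1, mirroring the rank-connectivity behavior the Swiss team measured in the Debian dependency network.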

Researchers figure out how to outperform nature's photosynthesis

(PhysOrg.com) — The Proceedings of the National Academy of Sciences (PNAS) last week published a paper titled "Solar hydrogen-producing bionanodevice outperforms natural photosynthesis." The authors are Carolyn E. Lubner, Amanda M. Applegate, Philipp Knörzer, Alexander Ganago, Donald A. Bryant, Thomas Happe and John H. Golbeck. They modified the photosynthetic proteins found in cyanobacteria — bacteria which gain their energy through photosynthesis.

Says io9: "They frankensteined together proteins from Synechococcus sp. with those from Clostridium acetobutylicum using molecular wire to create a 'hybrid biological/organic nanoconstruct' that was more efficient than either on their own."

These researchers have created a tiny solar-powered device that works twice as fast as nature to produce hydrogen biofuel. In describing their research, they say that although solar biohydrogen systems using photosystem I (PSI) have been developed, few attain the electron transfer throughput of oxygenic photosynthesis. They optimized a nanoconstruct that tethers FB, the terminal [4Fe-4S] cluster of PSI from Synechococcus sp. PCC 7002, to the distal [4Fe-4S] cluster of the [FeFe]-hydrogenase (H₂ase) from Clostridium acetobutylicum.

From the abstract: "On illumination, the PSI–[FeFe]-H₂ase nanoconstruct evolves H₂ at a rate of 2,200 ± 460 μmol (mg chlorophyll)⁻¹ h⁻¹, which is equivalent to 105 ± 22 e⁻ PSI⁻¹ s⁻¹. Cyanobacteria evolve O₂ at a rate of approximately 400 μmol (mg chlorophyll)⁻¹ h⁻¹, which is equivalent to 47 e⁻ PSI⁻¹ s⁻¹, given a PSI to photosystem II ratio of 1.8. The greater than twofold electron throughput by this hybrid biological/organic nanoconstruct over in vivo oxygenic photosynthesis validates the concept of tethering proteins through their redox cofactors to overcome diffusion-based rate limitations on electron transfer."

The researchers are among scientists in general who are looking at photosynthesis to invent materials and design new processes that can help save our planet. Associate Professor John Stride, of the University of New South Wales, commented to the ABC that "nature has had millennia to solve problems, and photosynthesis is very efficient."

In turning to biomimicry, scientists are designing devices based on photosynthesis. As for the study authors, in making their biofuel device they replaced the FNR enzyme with hydrogenase. One of the co-authors, Penn State Professor Donald Bryant, said there are good prospects for using some of these biological photosynthesis systems to produce biofuels for the future.

More information: Solar hydrogen-producing bionanodevice outperforms natural photosynthesis, PNAS, published online before print December 12, 2011, doi: 10.1073/pnas.1114660108
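The "greater than twofold" figure can be checked directly from the per-PSI electron rates quoted above. The snippet below is a simple back-of-envelope verification; scaling the device's quoted uncertainty is an assumption for illustration, since the paper's abstract gives no uncertainty on the in vivo rate.

```python
# Electron throughput of the PSI-[FeFe]-hydrogenase nanoconstruct versus
# in vivo oxygenic photosynthesis, using the rates quoted in the abstract.
device_rate, device_err = 105.0, 22.0  # electrons per PSI per second
in_vivo_rate = 47.0                    # electrons per PSI per second

ratio = device_rate / in_vivo_rate
ratio_err = device_err / in_vivo_rate  # scale the quoted device uncertainty
print(f"throughput ratio: {ratio:.2f} +/- {ratio_err:.2f}")
# -> throughput ratio: 2.23 +/- 0.47, i.e. greater than twofold
```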

No-glasses 3-D technology to showcase at CES 2012

(PhysOrg.com) — Stream TV Networks plans to introduce a line of products that feature 3-D viewing without glasses. What's so special about its announcement, on top of scores of 3-D-without-glasses announcements? The company says it has special technology in the name of Ultra-D, which can do nothing less than shift the way people will view media, according to its CEO.

Ultra-D is the company's display technology that can carry out real-time conversion of 2-D to 3-D without necessitating the use of special glasses for viewing. What's more, the technology enables the real-time conversion of 3-D content with glasses to 3-D content without glasses. The company's press announcement describes this approach as autostereoscopic 3-D imagery. Ultra-D depends upon custom hardware, middleware techniques and software algorithms to give viewers the instant conversions. The company says Ultra-D works with Blu-ray, DVD, PC gaming, Internet, cable and satellite content. The technology also allows users the freedom to customize the 3-D effect. This will address, the company says, individual differences in spatial perception and eye comfort.

Complaints about 3-D eye strain and a general reluctance toward wearing special glasses for home viewing appear to be proactively addressed by the company, eager to accent the positive about 3-D's future. "Our ultimate goal was to create a solution that addresses existing concerns impeding the adoption of 3-D: consumer aversion to expensive glasses, viewer discomfort, variance in individual vision and preference, and the slow creation of 3-D content," said Mathu Rajan, CEO, in announcing a product line-up.

Stream TV Networks plans to unveil Ultra-D at the upcoming CES 2012, from January 10 to January 13. His company's Ultra-D brand, according to the press release, includes 3-D-enabled TVs, tablets, PCs, smartphones, digital signage and picture frames. More information about this product line will be revealed at the company's January 9 press conference at CES 2012.

Rajan believes his company's technology is going to turn the corner on a scale comparable to when consumers transitioned from black-and-white to color TV. If so, he will prove to have more vision than media industry skeptics who may see 3-D television as a highly touted technology that is not yet entrenched in people's homes.

According to a November survey by Retrevo, 55 percent of those who said they were going to buy an HDTV in 2012 said they will not buy a 3-D TV; respondents partly blamed their reluctance on a paucity of content and the requirement to wear 3-D glasses. In interviews elsewhere, media company executives have said they thought 3-D television was still a niche, but they also saw the possibility of a rise in an installed base of 3-D TVs with Internet connectivity. They see more 3-D action after 2012.

More information: Press release

Stanford trio explore success formula for Reddit posts

(Phys.org) — Marketers and other new media workers hoping for traffic payoffs will want to study a paper by three Stanford University scientists who have pulled apart the nature of Reddit posts to determine what, beyond quality of content, constitutes success. "What's in a name? Understanding the Interplay between Titles, Content, and Communities in Social Media" shows what a clever use of mathematics can do in getting to the secret sauce of a successful post. The study by Himabindu Lakkaraju, Julian McAuley, and Jure Leskovec found that the important factors include where you post on Reddit, when you post on Reddit, and, last but not least, what title you give the post. "Creating, placing, and presenting social media content is a difficult problem. In addition to the quality of the content itself, several factors such as the way the content is presented (the title), the community it is posted to, whether it has been seen before, and the time it is posted determine its success."

[Figure: Submissions of the same image (a bear riding Abraham Lincoln) to Reddit, with different titles. The top plot shows how successful each submission is, along with the community model's prediction of its popularity without knowing the title. The community model accounts for factors such as the number of previous submissions of the same content, the community (i.e., 'subreddit') the image is submitted to, and the time of day of the submission. Credit: Himabindu Lakkaraju, Julian McAuley, Jure Leskovec]

Reddit is a news and entertainment website where registered users submit content in the form of a link or text post. Others vote the submission up or down, and a rank and position on the site emerge. Content is organized by areas of interest, or "subreddits." The site averaged more than 3 billion page views per month during 2012.

The authors explained how they went about their investigation: "We proposed a novel dataset of images posted to reddit.com, each of which has been submitted multiple times, with multiple titles. We developed community models to account for effects other than the title, such as the choice of community the image is submitted to and the number of times it has been submitted. We also developed language models that account for how the success of each submission was influenced by its title, and how well that title was targeted to the community. These models allowed us to study features of good titles, and to understand when, where, and how a submission should be targeted."

Beyond content, said the authors, one of the key factors that make or break posting success is how the user fashions the post's title. "How should we phrase the title of our submission? Should the title be long or short, specific or generic, unique or derivative, flowery or plain?" wrote the authors, stating this was part of the challenge of coming up with a successful post on Reddit. They further noted that the length of a title does not significantly impact the popularity of a submission, unless the title is either extremely long or extremely short. Also, they found that "'positive' sentiment contributes to a title's popularity in certain communities." Even parts of speech played a role: "Certain communities may favor highly descriptive, adjective laden titles, while others may prefer simpler titles consisting of a few nouns."

Overall, the takeaway from their investigation: "Indeed, we confirm our intuition that good content 'speaks for itself', and can achieve popularity regardless of what title is used. Choosing a good title has a secondary—though still important—effect. We find that features such as length, descriptiveness, and even sentence structure can be predictive of whether a title will be successful. We find that the choice of community also plays a major role."

The research focused on Reddit, but it may hold lessons for other types of social media submissions as well. It is not always easy to assess what drives the success of a given submission: is it really the content alone, or the title, or timing, or smart targeting to the right community? "We have developed models that disentangle the interplay between these factors."

More information: What's in a name? Understanding the Interplay between Titles, Content, and Communities in Social Media, PDF: i.stanford.edu/~julian/pdfs/icwsm13.pdf
Abstract: Creating, placing, and presenting social media content is a difficult problem. In addition to the quality of the content itself, several factors such as the way the content is presented (the title), the community it is posted to, whether it has been seen before, and the time it is posted determine its success. There are also interesting interactions between these factors. For example, the language of the title should be targeted to the community where the content is submitted, yet it should also highlight the distinctive nature of the content. In this paper, we examine how these factors interact to determine the popularity of social media content. We do so by studying resubmissions, i.e., content that has been submitted multiple times, with multiple titles, to multiple different communities. Such data allows us to 'tease apart' the extent to which each factor influences the success of that content. The models we develop help us understand how to better target social media content: by using the right title, for the right community, at the right time.
via BusinessInsider
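The community and language models in the paper are considerably richer, but the basic recipe, scoring a submission from its community, timing, resubmission history and title features, can be sketched compactly. Everything below (the feature set, the model form, the toy data points) is an illustrative assumption, not the authors' code.

```python
import math
from dataclasses import dataclass

@dataclass
class Submission:
    subreddit: str   # community the content was posted to
    hour: int        # hour of day of the submission (0-23)
    n_prior: int     # how many times this content was submitted before
    title: str
    score: float     # observed popularity (e.g., net upvotes)

def features(s: Submission) -> dict:
    """Hand-rolled features in the spirit of the study: community,
    time of day, a resubmission penalty, and simple title statistics."""
    words = s.title.split()
    return {
        f"community={s.subreddit}": 1.0,
        f"hour_bucket={s.hour // 6}": 1.0,       # four coarse time-of-day buckets
        "log_prior_submissions": math.log1p(s.n_prior),
        "title_length": len(words) / 10.0,
        "title_has_question": float(s.title.rstrip().endswith("?")),
    }

def predict(weights: dict, s: Submission) -> float:
    return sum(weights.get(k, 0.0) * v for k, v in features(s).items())

def train(data, epochs=50, lr=0.01):
    """Least-squares fit of log-popularity by stochastic gradient descent."""
    weights = {}
    for _ in range(epochs):
        for s in data:
            err = predict(weights, s) - math.log1p(max(s.score, 0.0))
            for k, v in features(s).items():
                weights[k] = weights.get(k, 0.0) - lr * err * v
    return weights

# Toy usage with made-up data points:
data = [
    Submission("pics", 14, 0, "A bear riding Abraham Lincoln", 4200),
    Submission("pics", 3, 2, "Saw this today", 35),
    Submission("funny", 15, 1, "Bear cavalry is real?", 900),
]
w = train(data)
print(sorted(w.items(), key=lambda kv: -abs(kv[1]))[:5])
```

In the paper, models of this kind are fitted on a large corpus of resubmitted images, which is what lets the authors separate the title's effect from the community, timing, and resubmission effects.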

Future quantum technologies may exploit identical particle entanglement

[Figure: Illustration of entanglement between two identical particles with overlapping wave functions. Credit: Lo Franco and Compagno. ©2018 American Physical Society]

Usually when physicists perform quantum entanglement between particles—whether it be qubits, atoms, photons, electrons, etc.—the particles are distinguishable in some way. Only recently have physicists demonstrated the feasibility of generating entanglement between particles that are completely identical. Interestingly, this entanglement exists purely because of the indistinguishability of the particles, without any interaction between them.

Now in a new paper, physicists have gone a step further, showing that the entanglement between identical particles can be harnessed and potentially used for quantum applications. The physicists, Rosario Lo Franco and Giuseppe Compagno at the University of Palermo, Italy, demonstrate the usefulness of identical particle entanglement in a recent issue of Physical Review Letters.

As the physicists explain, in order for two independently prepared, identical particles to be entangled, they must share a region of space in close physical proximity—more technically, the particles' wave functions must spatially overlap, at least partially. If there is no spatial overlap, there is no entanglement. If there is partial spatial overlap, and measurements are made within the overlap region, then there is conditional entanglement with a certain probability. Only when the wave functions exhibit complete spatial overlap is there always entanglement, though the amount of entanglement depends on both the measurement and the shape of the wave functions.

The main result of the new study is a procedure to directly extract the entanglement that occurs when the wave functions completely overlap, and then use this entanglement as a resource for various applications. To do this, the researchers extended the concept of LOCC (local operations and classical communication), which is typically used to quantify the entanglement between distinguishable particles, to indistinguishable ones. This required defining spatial LOCC, or sLOCC, operations, which make it possible to quantify the exploitable entanglement contained in a state of independently prepared identical particles and use it—for example, for a teleportation protocol.

"Our study indicates that a basic entangling mechanism can be realized by simply bringing independent identical particles to spatially overlap and accessing the entanglement by sLOCC operations," Lo Franco told Phys.org. "This operational approach is what is desirable for experiments."

The physicists expect that it should be possible to experimentally carry out the new procedure using straightforward methods. "For instance, a simple implementation may be obtained by a modified Hanbury Brown and Twiss photonic setup with orthogonal polarizers placed before detection," said Compagno.

In the future, this new kind of entanglement could have applications in areas ranging from quantum networks to Bell experiments, among other uses.

More information: Rosario Lo Franco and Giuseppe Compagno. "Indistinguishability of elementary systems as resource for quantum information processing." Physical Review Letters. DOI: 10.1103/PhysRevLett.120.240403. Also at arXiv:1712.00706 [quant-ph]
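In schematic form, the mechanism works roughly as follows (a sketch in notation adapted from the paper's framework; the specific amplitudes and labels are illustrative assumptions). Two identical qubits are prepared independently, with opposite pseudospins, in spatial wave functions that each spread over two separated regions L and R:

```latex
\[
  |\psi_1\rangle = l\,|L\rangle + r\,|R\rangle , \qquad
  |\psi_2\rangle = l'\,|L\rangle + r'\,|R\rangle ,
\qquad
  |\Psi\rangle = |\psi_1 \,\uparrow,\; \psi_2 \,\downarrow\rangle .
\]
% An sLOCC operation projects onto the subspace with exactly one particle
% found in L and one in R (local operations on the two regions, plus
% classical communication of the outcomes):
\[
  \Pi_{LR}\,|\Psi\rangle \;\propto\;
  l\,r'\,|L\,\uparrow,\; R\,\downarrow\rangle
  \;+\; \eta\; r\,l'\,|L\,\downarrow,\; R\,\uparrow\rangle ,
  \qquad \eta = +1 \ \text{(bosons)}, \;\; \eta = -1 \ \text{(fermions)} .
\]
```

For complete spatial overlap (l = l' and r = r') the two amplitudes have equal magnitude and the conditional state is a maximally entangled Bell state in the spin degree of freedom, which is the resource the sLOCC procedure extracts. With no overlap, one of the amplitudes vanishes and the projected state is a product state, matching the article's statement that no entanglement survives without overlap.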

Electronic noise due to temperature difference in atomic-scale junctions

Noise is a fundamental feature of any electrical measurement, reflecting random and correlated signal fluctuations. Although noise is typically undesirable, it can be used to probe quantum effects and thermodynamic quantities. Writing in Nature, Shein Lumbroso and co-workers now report a new type of electronic noise, distinct from all previous observations. Understanding such noise can be essential to designing efficient nanoscale electronics.

More than a century ago, in 1918, the German physicist Walter Schottky published a paper describing causes and manifestations of noise in electrical measurements. In the publication, Schottky showed that an electric current produced by an applied voltage is noisy even at absolute zero temperature, when all random heat-induced motion has stopped. The noise is a direct consequence of quantized electric charge, which arrives in discrete units; it was termed 'shot noise,' as it results from the granularity of the charge flow. In systems in thermal equilibrium, noise with distinctly different properties from shot noise comes into play at non-zero temperatures, known as Johnson-Nyquist noise. Shot noise is now a key tool for characterizing nanoscale electrical conductors, since it contains information on quantum-transport properties that cannot be revealed by mere electric-current measurements.

In the study, the authors studied junctions composed of single atoms or molecules suspended between a pair of gold electrodes. The electrodes were fabricated by breaking a thin gold wire into two parts and bringing them gently back into contact. In this process, hydrogen molecules were evaporated onto the device, known as a mechanically controllable break junction, to capture individual atoms or molecules between the electrode tips and establish an electrical contact.

[Figure 1: Experimental setup and noise contributions. a) Schematic of the break junction setup and the gold-hydrogen (Au/H₂) junction. b) Illustration of standard shot noise, thermal noise and delta-T noise generated in atomic-scale junctions; e is the electron charge. Credit: Nature, doi: 10.1038/s41586-018-0592-2]

The resulting junctions constituted a single quantum-mechanical transport channel through which electrons could be transmitted from one electrode to the other. The probability of electron transfer could be adjusted by varying the openness of the channel. This provided an ideal test bed to explore a noise contribution that had so far been overlooked. When a temperature difference was applied between the two electrodes, the authors observed a strong increase in electronic noise compared with electrodes at the same temperature. The new noise, termed 'delta-T noise,' scaled with the square of the temperature difference and exhibited a dependence on electrical conductance similar to that of shot noise.

[Figure 2: Three types of electronic noise in a setup where single atoms or molecules are suspended between the tips of two electrodes. a) At a non-zero temperature, electrons flow between the two electrodes; the signal contains thermal noise, which varies linearly with electrical conductance. b) If a voltage is applied, electrons flow from one electrode to the other and can be backscattered from the atom or molecule; the signal contains shot noise, present even at absolute zero, with a characteristic non-monotonic dependence on conductance. c) If a temperature gradient is applied, the signal contains the previously unreported delta-T noise, whose dependence on conductance is similar to that of shot noise. Credit: Nature News and Views, doi: 10.1038/d41586-018-06932-x]

The findings of the study were explained via the quantum theory of charge transport known as Landauer theory, developed over the past few decades. The theory includes both shot noise and thermal noise and has been tested intensively at the atomic and molecular scale; it accurately describes many experimental observations made entirely in thermal equilibrium or with small applied voltages. On closer inspection of the theory, the authors observed that it contains a noise component that appears when a temperature difference alone is applied across a junction, exactly as observed experimentally with delta-T noise. In the absence of an applied voltage, an electric current can still arise from a temperature difference via a phenomenon termed the Seebeck effect. According to the study, delta-T noise arises from the discreteness of the charge carriers mediating the heat transport.

Although the Landauer theory is widely used, delta-T noise had, surprisingly, not been observed before. The present work therefore conveys a key message: careful experimental design and rigorous analysis are required to study the details of quantum transport. Quantum-transport experiments that are not entirely in thermal equilibrium can show strongly enhanced noise, which could be mistaken for noise arising from interactions between charge carriers or other subtle effects. Unexpectedly high noise in electric-current measurements may simply be due to unintentional temperature gradients in the experimental setup. In practice, the authors' work can potentially be used to detect undesirable hot spots in electrical circuits. Future experiments will explore the relationship between delta-T noise and shot noise with a nonlinear dependence on applied voltage, a phenomenon recently observed in high-voltage experiments at atomic junctions.

In combination with thermal noise, delta-T noise can be used as a probe for temperature differences in nanoscale systems. Delta-T noise is a versatile probe compared with physical sensors: it is not limited to a particular setup range and can be applied to conductors of variable sizes, including those at the atomic scale. This versatility makes delta-T noise an attractive tool for heat management, which includes thermoelectricity, heat pumping and heat dissipation, important for energy saving and sustainable energy production. Since temperature gradients are often unintentionally produced in electronic circuits, they should be minimized to prevent the performance-limiting effects of delta-T noise. The sensitivity of delta-T noise to the properties and interactions of charge carriers could also become a valuable tool in quantum transport.

More information: Ofir Shein Lumbroso et al. Electronic noise due to temperature differences in atomic-scale junctions, Nature (2018). DOI: 10.1038/s41586-018-0592-2
Elke Scheer et al. Unexpected noise from hot electrons, Nature (2018). DOI: 10.1038/d41586-018-06932-x
A. Hamo et al. Electron attraction mediated by Coulomb repulsion, Nature (2016). DOI: 10.1038/nature18639
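The quadratic scaling of delta-T noise with the temperature difference can be reproduced numerically from the standard zero-frequency Landauer (scattering-theory) noise formula for a single conduction channel. The script below is an independent illustration, not the authors' analysis; the transmission value and temperature scale are arbitrary, and units are chosen so that k_B = 1 and noise is measured in units of 4e²/h.

```python
import numpy as np

def fermi(E, T):
    """Fermi-Dirac occupation with chemical potential mu = 0 (k_B = 1)."""
    return 1.0 / (np.exp(E / T) + 1.0)

def landauer_noise(tau, T_left, T_right):
    """Zero-bias, zero-frequency current noise of one channel with
    energy-independent transmission tau, in units of 4e^2/h:

        S = integral dE [ tau*(fL*(1-fL) + fR*(1-fR))   <- thermal part
                          + tau*(1-tau)*(fL - fR)**2 ]  <- delta-T part

    The second term vanishes when T_left == T_right and grows as the
    square of the temperature difference, like the measured delta-T noise."""
    E = np.linspace(-40.0, 40.0, 200001)  # energy grid, wide compared to max(T)
    fL, fR = fermi(E, T_left), fermi(E, T_right)
    integrand = (tau * (fL * (1 - fL) + fR * (1 - fR))
                 + tau * (1 - tau) * (fL - fR) ** 2)
    return np.trapz(integrand, E)

tau, T_mean = 0.5, 1.0
S_eq = landauer_noise(tau, T_mean, T_mean)  # pure thermal (Johnson-Nyquist) noise
for dT in (0.1, 0.2, 0.4):
    S = landauer_noise(tau, T_mean + dT / 2, T_mean - dT / 2)
    print(f"dT = {dT:.1f}: excess noise = {S - S_eq:.6f}")
# Doubling dT quadruples the excess noise: the delta-T term scales as (dT)^2.
```

Note that the delta-T term carries the same tau*(1 - tau) factor as shot noise and vanishes for a perfectly open channel, consistent with the article's point that this noise stems from the granularity of partially transmitted charge carriers.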

A fair run

The 5th edition of the Contours India Women's Day Run was held today, March 2, 2014, at the Sri Kanteerava Indoor Stadium, Bangalore. The popular annual run is hosted every year on the occasion of International Women's Day by Contours India, a women's-only fitness chain. The event is aimed at raising awareness of women's empowerment and raising funds for the Asha Foundation. With decades of experience hosting world-class marathon events, Bangalore is the ideal location for the Women's Day Run to celebrate womanhood and bring in change for the future of children. The event is powered by Performax by Reliance Trends and supported by charity partner Asha Foundation.

Music to the years

Legends of India Life Achievement Awards are conferred upon living legends who have contributed to taking our art forms to a different platform among the masses in India and abroad. One of India's most prestigious awards shall be presented on the occasion of the 15th Annual Legends of India Sangeet Mahotsav.

The Sangeet Mahotsav is dedicated to Swami Vivekananda on the occasion of his 150th birth anniversary. The music of Swami Vivekananda, Dhrupad, shall be the central theme of the festival. Swami Shantatmananda, Secretary, Ramakrishna Mission, will grace the occasion.

Vivekananda was an exponent of Dhrupad, and he also played the tabla, tanpura, khol and pakhawaj. Dhrupad, a genre of Hindustani classical music, is said to be the oldest still in use in the music tradition. To be held at the Siri Fort Auditorium on December 26-27, the festival features live performances by Uday Bhawalkar (vocal), Bhai Baldeep Singh (vocal), the Gundecha Brothers in jugalbandi with Igino Brunori and Virginia Nicoli (saxophone and flute), and Ustad Bahauddin Dagar (rudra veena), besides a jugalbandi of flute by Ronu Majumdar and vocals by Rajan and Sajan Mishra.

This year, the award for music shall be conferred upon the internationally renowned Hindustani classical vocalist Prabha Atre. The award for theatre shall be conferred upon Ebrahim Alkazi, god's gift to theatre, synonymous with the stage and all its forms. As the director (1962–77) of the National School of Drama, Alkazi catalyzed its emergence as India's premier theatre training institute. The award for fine arts shall be conferred upon Himmat Shah, one of India's renowned sculptors of terracotta and bronze, exploring materiality as well as texture. The recipients of the awards are chosen independently by a selection committee comprising artists, critics and eminent people associated with the arts.

The canvas counts

Korean Cultural Centre presents Hanji Impression, an exhibition introducing Hanji to India, by Korean and Indian artists Park Yeo-Sang and Sharmi Chowdhury, who use Hanji as the base for their artistic interpretations.

Hanji literally means 'the paper of Korea'. Its main material is the fibrous skin of the mulberry. Hanji is not simply paper: it is used in a variety of ways, and has a different name according to its use. If it is glued on a door, it is called window paper. It is copy paper if used for a family registry book, Buddhist sutra or old books, while it becomes drawing paper if four gracious plants or birds are drawn upon it.

As part of the exhibition, workshops on Hanji making, Hanji book making and Hanji calendar making will also take place on January 15 (11 am - 12.30 pm and 2.30 pm - 4 pm), January 23 (10 - 11.30 am and 12 - 1.30 pm) and January 27-28 (10 - 11.30 am, with school students and interested children) respectively.

The artists believe: "Even with changing times and the dominance of smartphones and digital culture, nothing can completely replace paper. As a medium of expressing emotions, we hope that Hanji undergoes a transformation at the hands of the artists to become a valuable piece of art."

When: January 15 - February 5
Where: Exhibition Hall, Korean Culture Centre, Lajpat Nagar

Rickman secretly married Rima Horton 3 years ago

Harry Potter actor Alan Rickman married his longtime partner Rima Horton in a secret ceremony after approximately 40 years together. Representatives for the couple confirmed that they tied the knot three years ago in New York, reported Ace Showbiz.

The news was revealed in an interview when Rickman, 69, was asked what the secret was to a successful relationship without getting married.