
Mum ‘shot dead by own son, 9,’ pictured as boy charged with murder

A mum who was allegedly shot dead by her nine-year-old son has been pictured for the first time.

Pauline Randol was fatally injured in a shooting at her own home in Fawn River Township, in Michigan, US, on Monday.

Police have since charged her son, 9, with murder and he is also alleged to have previously threatened to kill an eight-year-old girl living nearby.

St. Joseph County Sheriff Bradley Balk confirmed the boy was charged with one count of open murder and one count of felony firearm.

The child – who can’t be named – is now undergoing psychiatric evaluations at a state-run juvenile facility to determine his state of mind.



Her son, 9, has since been charged with murder

Neighbour Alecia Pieronski told local news outlet WWMT that the boy had previously threatened her daughter.

She said: “He told her that he wanted to get a knife and stab her and watch her die, and watch her mother cry.”

But the accused boy’s sister, Hayley Martin, told WOOD-TV that he is not a “bad kid”.

“He loved his mom. I don’t want people to think he did not love his mom.”

She added: “He doesn’t know what he did. He doesn’t understand what’s going on right now at all.

“He doesn’t understand why he can’t come home or anything.”



The incident happened in Fawn River Township on Monday

Sturgis Public Schools Superintendent Arthur Ebert issued the following statement: “Our community has experienced a tragic event. As a district, it is our goal to provide support to our students, staff, and the community.

“We are limited in what we can share due to privacy laws and the sensitive nature of this tragedy.

“The St. Joseph County Sheriff’s Department is leading the investigation regarding this event that occurred outside of school, including the release of information about the investigation.”


Mum in critical condition and toddler injured after being pulled out of car wreckage

A mother is fighting for her life and her toddler is being treated in hospital for head injuries after a car accident in Birmingham.

Two cars were left destroyed and the mother, who was trapped in her Kia, had to be cut free by firefighters.

She was driving with her 18-month-old daughter at the time of the accident; emergency workers said the child was pulled out of the wreckage by witnesses.

Another woman, who was in the Audi that crashed into the Kia, also had to be released by emergency services.

Her injuries were described as ‘potentially serious’ and she is being treated at Birmingham’s Queen Elizabeth Hospital, as is the other driver.

The mother is said to be in critical condition at the same hospital following the incident which occurred on Wednesday afternoon on Wolverhampton Road.

Her daughter’s injuries are not believed to be life-threatening and she was taken to Birmingham Children’s Hospital for further treatment.

A West Midlands Ambulance Service spokesman said: ‘Crews arrived to find two cars that had suffered significant damage in the collision.’

Wolverhampton Road remains closed in both directions.

West Midlands Police urged motorists to avoid the area if possible, while they carry out enquiries and make the area safe again.


Chrissy Teigen’s daughter is now a meme, of course

By Harry Hill

You could argue that Chrissy Teigen is best known for being a personality who is constantly photographed. And then turned into memes. It would only make sense that her children take after her, right?

The Insta-star, model, and cookbook author has two kids with singer John Legend: Luna, a 3-year-old girl, and Miles, an 11-month-old boy. Teigen is a big fan of sharing her children’s silly behavior online, whether they’re eating spaghetti or hamming it up for the camera. Between her Twitter and Instagram accounts, there’s no shortage of wildly entertaining baby content. 

When Luna was photographed with Legend on the set of The Voice, where he’s a coach, Teigen was quick to upload the picture with the caption “omg me” on Twitter.

However, the meme-worthy comparison came via her Instagram:


Anyone who has dabbled in memes will recognize Teigen’s infamous award show face. That viral moment occurred at the Golden Globes in 2015 and continues to circulate on Twitter today. 

Teigen once explained her weird award show reactions to Jimmy Fallon, saying, “you know how it works at these things. The camera’s, like, two feet in front of you, and the red light goes on, and as soon as that light goes on, I’m like, ‘Be normal!'” 

Luna’s on-set facial expression just goes to show the apple doesn’t fall far from the Teigen. 


Physical fitness reduces risk of lung and bowel cancers



Underwater tests reveal sharks may be smarter than you think

Sharks may be smarter than they seem. Recent experiments reveal they have a grasp of quantity and can learn cognitive skills from other sharks



8 May 2019

Blue sharks may well be intelligent too (Image: Chris & Monique Fallows/naturepl.com)

By Ruby Prosser Scully

SHARKS may be even more calculating than they seem. They can learn cognitive skills from other sharks and recent experiments reveal they have a grasp of quantity.

Vera Schluessel at the University of Bonn in Germany and her colleagues tested how well 12 bamboo sharks could recognise different numbers of objects.

Each shark was put in a training pool with pictures of two different groups of geometric shapes projected onto a wall. The team then cycled through at least 40 objects of different shapes and shades …



Exclusive: Google’s security cameras to drop key customization option

You won’t be able to toggle off the green status light on Nest and Dropcam security cameras, we discovered today at Google I/O 2019. It’s good and bad news for users.

Whenever a Nest camera is live, the green light will shine – and soon, according to Google reps we talked to today, without the user control that used to be there.

It’s being done in the name of privacy from the outset on Nest Hub Max, which has a 127-degree field of view camera atop a 10-inch display. We knew this at the keynote.

Later, TechRadar found out that this will actually take away the toggle feature on the Nest Cam and Dropcam in a future software update. The ability to toggle the status light off and keep the video-capture running will permanently go away ‘soon.’

Analysis: why this is a good and bad thing

The decision to roll back the green status light toggle on the Nest app for older Nest cameras was one the team wrestled with prior to Google I/O, according to the reps we talked to today. Privacy won out.

For: Always shining that green light on an active Nest Hub Max, Nest Cam or Dropcam is, on one hand, a wise decision for privacy.

It hampers Nest camera owners’ ability to spy on people (think of recent news about AirBnB hosts who have been caught spying on guests). It can also alert Nest owners if a hacker gets access to their Nest login and starts watching them. If that green light is on when it’s not supposed to be, you know something is up.

Against: A thief doesn’t have to be very smart to notice a green light and circumvent your camera – essentially stealing outside of your cone of sight. 

Example: When I was at work six months ago, I had a landlord enter my apartment to ‘show my place off’ to future tenants without much notice. He picked up some of my belongings (think: a lot of expensive technology on a shelf) and ‘inspected’ them. When I was there before, he commented how he ‘really loved my 360 camera’.

My tech didn’t end up walking, but multiple strangers did enter my NYC apartment (I couldn’t always be there when the apartment was shown off), and multiple times people went ahead and grabbed things off my shelf for a look.

Turning on the status light in a situation like this could alert someone to the fact that they’re being recorded (not a problem) and they could easily turn their back to the camera and slip something into their pocket out of sight (problem). Or they could ‘accidentally’ knock the camera out of the way and take everything they wanted (major problem).

Google’s decision is a smart shift in privacy PR for the embattled data-hoarding firm. But it may leave some people vulnerable.

The solution? Buy more Nest Cams to get multiple angles.


Everything you want to know about the Google I/O keynote is right here | TechCrunch (Chinese edition)

At its annual I/O developer conference this afternoon, Google used a two-hour keynote to unveil a series of new products it has built over the past year – from new phones to the next generation of its voice assistant, Assistant.

If you didn’t have time to watch the whole event, no problem: we’ve rounded up the most important products shown off at the keynote.

Pixel 3a and 3a XL

As rumored, Google introduced a more affordable version of the Pixel 3.

To bring the price down, the company stepped the processor down a notch (from a Snapdragon 845 to a Snapdragon 670), capped storage at 64GB, and dropped wireless charging. The good news: the 3.5mm headphone jack stays.

The Pixel 3a starts at $399, with a 5.6-inch screen and a 12.2-megapixel rear camera, running Android P. The Pixel 3a XL starts at $479 and bumps the screen up to 6.0 inches.

TechCrunch’s Brian Heater tried out the Pixel 3a and 3a XL earlier this week; you can click here for his hands-on impressions.

Nest Hub and Nest Hub Max

Google’s smart home device, the Home Hub, is being renamed the “Nest Hub”, and its price is dropping from $149 to $129.

Meanwhile, it is getting a “big brother”: the Nest Hub Max. The Nest Hub Max upgrades the screen from the Nest Hub’s 7 inches to 10 inches and adds a camera. It will tie into the Nest app, letting it work like any other Nest Cam camera. Google says a hardware switch on the back of the Nest Hub Max can disable the camera and microphone. It will sell for $229 and ship this summer.

A new Nest Hub Max feature, “Face Match”, will be able to recognize users’ faces and tailor its responses to them. In a blog post introducing the feature, Google said that “Face Match facial recognition is processed locally with on-device machine learning, so camera data never leaves the device.”

AR effects in search results

Certain search results – say, a particular style of shoe, or “great white shark” – will now include 3D models. Tap the 3D model and you can place it into a view of the real world via augmented reality.

Google Lens upgrades

Google Lens is learning some new skills. Point it at a restaurant menu and it will highlight the most popular dishes. Point it at your receipt and it will automatically calculate things like the tip and the total.

Duplex on the web

At last year’s I/O developer conference, Google introduced Duplex, an AI-powered customer service tool designed to help small businesses such as restaurants and hair salons answer more calls, field common questions, and book reservations and appointments.

This year, Duplex’s reach is expanding as it opens up on the web. Take renting a car online: just say “get me a rental car from [rental company]” and it will crawl the rental company’s website and automatically start booking a car for you. It can pre-fill details such as travel dates and your preferred vehicle type based on previous rental confirmations in your Gmail.

The next-generation voice assistant

Google has managed to shrink its speech recognition models from hundreds of gigabytes down to 0.5GB, small enough to install directly on a phone.

By storing the models locally, Google can eliminate the latency of sending data to the cloud, making conversations with Assistant feel almost instantaneous. Because it runs on-device, it even works in airplane mode. Google demonstrated the new speed by firing off voice requests in rapid succession, with almost no delay between a command (such as “get me a Lyft” or “turn on my flashlight”) and the resulting action.

Google says the next-generation Assistant will come to new Pixel phones later this year.

Assistant integration in Waze

Google Assistant will be integrated into the navigation app Waze, rolling out “in the coming weeks”, letting users handle tasks by voice, such as reporting crashes or potholes.

Assistant driving mode

Say “Hey Google, let’s drive” and Assistant will switch into driving mode, a minimalist, glanceable dashboard designed to cover your needs while driving, such as directions to frequent destinations and music controls.

Incognito mode in Google Maps

Just like Incognito mode in the browser, a new Incognito mode in Google Maps will keep destination searches and route data from being saved to your Google account history.

Live Caption and Live Relay

Android will soon be able to automatically caption media on your phone, including podcasts you have saved and videos you have recorded. Through a feature Google calls “Live Relay”, it can also transcribe phone calls in real time and let users reply by text.

Here is the Live Relay demo video Google released:

Project Euphonia

Google is researching how to extend its AI speech algorithms to better understand users with speech impairments (such as people with ALS or stroke survivors), while tailoring speech models to individual users to better help them communicate.

Dark theme

Android Q will have a dark mode; you can turn it on manually, and it can also switch on automatically in battery-saver mode.

Focus mode

Need to get some work done? With Focus mode, you list the apps that distract you most; flip a switch and they disappear until you turn Focus mode off. The feature comes to Android this fall.

Google Maps AR mode on Pixel phones

A few months ago, Google showed off a new augmented reality mode it has been building for Google Maps, designed to make sure users set off in the right direction. Hold up your phone and you will see the world in front of you in a camera view. Google Maps compares that image against its Street View data to pin down your exact position more accurately than GPS alone can, then draws arrows pointing you the right way.

The mode has been in testing for a while and should start rolling out to Pixel phones later today.

Shortly after the consumer-focused keynote ended, Google followed with a developer-focused keynote. Its main topics included:

Translated by 皓岳

Here’s everything Google announced today at the I/O 2019 Keynote


Sextech company scorned by CES scores $2M and an apology

Lora DiCarlo, a startup coupling robotics and sexual health, has $2 million to shove in the Consumer Electronics Show’s face.

The same day the company was set to announce their fundraise, The Consumer Technology Association, the event producer behind CES, decided to re-award the Bend, Oregon-based Lora DiCarlo with the innovation award it had revoked from the company ahead of this year’s big event.

“We appreciate this gesture from the CTA, who have taken an important step in the right direction to remove the stigma and embarrassment around female sexuality,” Lora DiCarlo founder and chief executive officer Lora Haddock (pictured) told TechCrunch. “We hope we can continue to be a catalyst for meaningful changes that make CES and the consumer tech industry inclusive for all.”

In January, the CTA nullified the award it had granted the business, which is building a hands-free device that uses biomimicry and robotics to help people achieve a blended orgasm by simultaneously stimulating the G spot and the clitoris. Called Osé, the device uses micro-robotic technology to mimic the sensation of a human mouth, tongue and fingers in order to produce a blended orgasm for people with vaginas.

Lora DiCarlo’s debut product, Osé, set to release this fall. The company says the device is currently undergoing changes and may look different upon release.

“CTA did not handle this award properly,” CTA senior vice president of marketing and communications Jean Foster said in a statement released today. “This prompted some important conversations internally and with external advisors and we look forward to taking these learnings to continue to improve the show.”

Lora DiCarlo had applied for the CES Innovation Award back in September. In early October, the CTA notified the company of its award. Fast-forward to October 31, 2018 and CES Projects senior manager Brandon Moffett informed the company they had been disqualified. The press storm that followed only boosted Lora DiCarlo’s reputation, put Haddock at the top of the speakers’ circuit and proved, once again, that sexuality is still taboo at CES and that the gadget show has failed to adapt to the times.

In its original letter to Lora DiCarlo, obtained by TechCrunch, the CTA called the startup’s product “immoral, obscene, indecent, profane or not in keeping with the CTA’s image” and that it did “not fit into any of [its] existing product categories and should not have been accepted” to the awards program. CTA later apologized for the mishap before ultimately re-awarding the prize.

At the request of the CTA, Haddock and her team have been working with the organization to create a more inclusive show and better incorporate both sextech companies and women’s health businesses.

“We were a catalyst to a huge, resounding amount of support from a very large community of people who have been quietly thinking this is something that needs to happen,” Haddock told TechCrunch. “For us, it was all about timing.”

Lora DiCarlo plans to use its infusion of funding, provided by new and existing investors led by the Oregon Opportunity Zone Limited Partnership, to hire ahead of the release of its first product. Pre-orders for the Osé, which will retail for $290, will open this summer with an expected official release this fall.

Haddock said four other devices are in the pipeline, one specifically for clitoral stimulation, another for clitoral and vaginal stimulation, one for anywhere on the body and the other, she said, is a different approach to the way people with vulvas masturbate.

“We are aiming for that hands-free, human experience,” Haddock said. “We wanted to make something really interesting and very different and beautiful.”

Next year, Haddock says they plan to integrate their products with virtual reality, a step that will require a larger boost of capital.

Haddock and her employees don’t plan to quiet down any time soon. With their newfound fame, the team will continue supporting the expanding sextech industry and gender equity within tech generally.

“We’ve realized our social mission is so important,” Haddock said. “Gender equality, at its source, is about sex. We absolutely demonize sex and sexuality … When you talk about removing sexual stigmas, you are also talking about removing gender stigmas and creating gender equity.”


Brains Speed Up Perception by Guessing What’s Next | Quanta Magazine

neuroscience

Your expectations shape and quicken your perceptions. A new model that explains the effect suggests it’s time to update theories about sensory processing.

Imagine picking up a glass of what you think is apple juice, only to take a sip and discover that it’s actually ginger ale. Even though you usually love the soda, this time it tastes terrible. That’s because context and internal states, including expectation, influence how all animals perceive and process sensory information, explained Alfredo Fontanini, a neurobiologist at Stony Brook University in New York. In this case, anticipating the wrong stimulus leads to a surprise, and a negative response.

But this influence isn’t limited to the quality of the perception. Among other effects, priming sensory systems to expect an input, good or bad, can also accelerate how quickly the animal then detects, identifies and reacts to it.

Years ago, Fontanini and his team found direct neural evidence of this speedup effect in the gustatory cortex, the part of the brain responsible for taste perception. Since then, they have been trying to pin down the structure of the cortical circuitry that made their results possible. Now they have. Last month, they published their findings in Nature Neuroscience: a model of a network with a specific kind of architecture that not only provides new insights into how expectation works, but also delves into broader questions about how scientists should think about perception more generally. Moreover, it falls in step with a theory of decision making that suggests the brain really does leap to conclusions, rather than building up to them.

Faster Senses and Active States

Taste, the least studied of the senses, was the perfect place to start. After a taste hits the tongue, a few hundred milliseconds pass before activity in the gustatory cortex starts reflecting the input. “In brain terms, that’s like forever,” said Don Katz, a neuroscientist at Brandeis University in Massachusetts (in whose lab Fontanini did his postdoctoral work). “In the visual cortex, it takes a fraction of that time,” making it much more difficult to discern the expectation effect that these researchers wanted to study.

In 2012, Fontanini and his colleagues performed an experiment in which rats heard a sound (an “anticipatory cue”) and then received a tiny burst of flavor through a tube in their mouth. The taste itself could be sweet, salty, sour or bitter, and the anticipatory cue contained no information about which of the four it might be.

Even so, the researchers found that such general expectations could drive the neurons in the gustatory cortex to recognize the stimulus nearly twice as fast as when the rats received the taste without hearing the sound first. The period of latency dropped from roughly 200 milliseconds to only about 120 milliseconds.

Fontanini wanted to know what kind of neural network could theoretically enable this more rapid coding. And so he brought someone from outside the taste field into the fold: fellow Stony Brook neurobiologist Giancarlo La Camera, who had previously worked on modeling the spontaneous brain activity that occurs even in the absence of a stimulus.

The past few decades have increasingly highlighted that much of the activity in sensory networks is intrinsically generated, rather than driven by external stimuli. Compare the activity in the visual cortex of an animal in complete darkness with that of an animal looking around, and it’s difficult to tell the two apart. Even in the absence of light, sets of neurons in the cortex begin to fire together, either at the same time or in predictable waves. This correlated firing persists as a so-called metastable state for anywhere from a few hundred milliseconds to a few seconds, and then the firing pattern shifts to another configuration. The metastability, or tendency to hop between transient states, continues after a stimulus is introduced, but some states tend to arise more often for a particular stimulus and are therefore thought of as “coding states.”

La Camera and others (including Katz) had previously modeled metastability by building what’s called a clustered network. In it, groups of excitatory neurons had strong interconnections, but inhibitory neurons were also randomly connected to the excitatory ones, which added a broad damping effect to the system. “This clustered architecture is fundamental for producing metastability,” Fontanini said.
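To make the idea concrete, here is a minimal toy sketch of such a clustered network. This is a simplified rate model, not the authors’ spiking model, and every weight and parameter value here is an illustrative assumption; the point is only that strong excitatory weights inside each cluster plus broad inhibition between clusters are enough to produce noise-driven hopping between cluster-dominant states:

```python
import numpy as np

def count_state_switches(n_clusters=4, units_per_cluster=10, steps=2000,
                         w_in=0.5, w_out=-0.1, noise=0.15, cue_gain=0.0,
                         seed=0):
    """Toy clustered rate network: strong excitatory weights inside each
    cluster, weak inhibitory weights between clusters.  Returns how many
    times the 'dominant' cluster changes over the run -- a crude proxy
    for hopping between metastable states."""
    rng = np.random.default_rng(seed)
    n = n_clusters * units_per_cluster

    # Clustered weight matrix: w_in within a cluster, w_out elsewhere.
    W = np.full((n, n), w_out)
    for c in range(n_clusters):
        s = slice(c * units_per_cluster, (c + 1) * units_per_cluster)
        W[s, s] = w_in
    np.fill_diagonal(W, 0.0)

    rates = rng.random(n)  # unit firing rates, kept in [0, 1]
    dominant = []          # which cluster has the highest mean rate
    for _ in range(steps):
        drive = (W @ rates / units_per_cluster + cue_gain
                 + rng.normal(0.0, noise, n))
        rates = np.clip(rates + 0.1 * (np.tanh(drive) - rates), 0.0, 1.0)
        cluster_means = rates.reshape(
            n_clusters, units_per_cluster).mean(axis=1)
        dominant.append(int(np.argmax(cluster_means)))

    # Count transitions between cluster-dominant states.
    return sum(a != b for a, b in zip(dominant, dominant[1:]))
```

With the noise term at zero the network settles into a single cluster-dominant state and stays there; with moderate noise it hops between states, which is the metastability the clustered architecture buys you. The `cue_gain` parameter is a hypothetical knob that adds the kind of uniform extra drive an anticipatory cue might provide.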

Fontanini, La Camera and their postdoctoral fellow Luca Mazzucato (now at the University of Oregon) found that the same network structure was fundamental for recreating the effects of expectation, too. In a metastable model with a clustered architecture, the researchers simulated a general anticipatory cue followed by the arrival of a particular taste stimulus. When they did this, they successfully reproduced the pattern of accelerated coding that Fontanini had observed in rats in 2012: The transitions from one metastable state to the next got faster, which also made it possible for the system to reach coding states faster. The results demonstrated that simply by building a network to show these metastable patterns of activity, “you can also capture a lot of the neurological responses … when you simulate a gustatory input,” Fontanini said.

When the researchers tried modeling the anticipatory cue and stimulus in a network without clusters, they couldn’t generate the 2012 results. And so “only certain types of networks allow this [effect] to happen,” Katz said.

A Less Strenuous Hike

The finding was notable, first, for providing insights into what kind of architecture to search for in the actual gustatory cortex — and perhaps in other sensory cortices as well. Currently, neuroscientists are debating how taste gets processed: Some argue that certain neurons might encode “sweet” and others “salty,” creating very specific neural signatures for specific tastes. Others tie it to broader patterns of activity; most neurons respond to most tastes, and a given neural signature is more roughly correlated with one taste over another. The work done by Fontanini and his colleagues supports the latter theory while providing predictions about what that connectivity should look like. The clusters alone “capture many, many features of the gustatory cortex,” Fontanini said: “the spontaneous activity, the patterns of response to taste, the expectation effect.” He hopes to continue digging into how those clusters form, and what other kinds of neural activity they affect.

The work also paints a picture of the neural substrate underlying expectation in the brain. It’s not just that an anticipatory cue excites particular neurons, or induces a particular set of states, which then encode the stimulus. Instead, it’s more significant that expectation seemed to modify the dynamics — namely, the switching speed — of the entire system.

Fontanini and La Camera liken these dynamics to a ball moving through a landscape filled with troughs. Those pockets or valleys represent response states, and anticipation tips the landscape so that the ball falls into the first trough faster. It also smooths out the hilly path the ball needs to traverse between troughs, making it easier to pass from one state to the next without getting stuck.

That is, expectation makes the network a little less sticky. It allows for an easier hike toward the states that encode an actual taste, but it does not confer so much stability that the system gets stuck in a single state. That’s a problem that often plagues these kinds of clustered networks: With such clustering, some “trough” states end up being too deep, and the system amplifies the wrong information. But these findings show that “you don’t need an elaborate system” in place to resolve that, said Georg Keller, a neuroscientist who studies visual processing at the Friedrich Miescher Institute for Biomedical Research in Switzerland.

Fontanini and La Camera hope this kind of mechanism might also explain the effects of other context-setting processes beyond expectation, like attention and learning. But perhaps the “most important implication [of our work] is that it shifts the focus from the static firing responses of neurons coding for things, to dynamical behaviors of neurons,” La Camera said.

While a dynamical systems approach to neuroscience is hardly new, it’s been difficult to test and model. The way experts think about basic sensory perception tends toward the hierarchical: The cortex builds up and integrates features to form perceptions, sending signals to other layers of the network that integrate still more information until the brain ultimately arrives at a decision or behavior.

Not so in this new work. Instead, the team’s results support a different kind of processing in which “all of this happens at the same time, and … before the stimulus even arrives,” said Leslie Kay, a neuroscientist at the University of Chicago who focuses on olfaction. “You learn stuff within a cortical area,” forming a system of connected clusters to reflect that learning, “and then you influence it [with expectation], and what it knows emerges.”

A Sudden Tumble

The model implies that decision making isn’t a gradual process driven by the buildup of information at all, but rather a sort of “aha” moment, a jump in neural fluctuations. In fact, Katz has used the same kind of modeling as Fontanini and La Camera to support the idea that arriving at a decision (say, to swallow or spit out a piece of food) “happens in a sudden tumble,” he said.

The connection between these “very different corners of the taste field” — Fontanini’s work on sensory perception and his own research on later processing — leaves Katz feeling “super excited.”

It also highlights the need to move away from focusing on single neurons that respond to particular cues, and toward making internal states and dynamics more explicit in our understanding of sensory networks — even for the most basic sensory stimuli. “It’s much easier to say that a neuron increases its firing rate,” said Anan Moran, a neurobiologist at Tel Aviv University in Israel. But to understand how organisms work, “you cannot account only for the stimulus, but also for the internal state,” he added. “And this means that our previous [understanding of] the mechanism used by the brain to achieve perception and action and so on needs to be reevaluated.”

“The stuff going on in the gustatory cortex before the stimulus arrives is a large part of how that stimulus gets processed when it gets there,” Katz said. And in this case, examining how those internal states get modified by an experience or cue revealed something about the overall network connectivity.

Now, Moran said, this kind of context dependence needs to find its way into other studies of perception and cognition. “The last frontier is the visual system… This [kind of work] might tell us something interesting about how visual information is processed.”

“We still don’t have any good, single model that really encapsulates all this activity,” he added. But this is “a good starting point.”


New species of “fierce” tiny dinosaurs with bat-like wings is discovered in China


A “fierce” new species of little dinosaurs with bat-like wings has been discovered in China. 

Palaeontologists have uncovered the fossilised remains of a 163-million-year-old creature that would have been around the size of a magpie, weighing just 300g. 

Named Ambopteryx longibrachium, it had bat-like membrane wings which were previously unknown among predatory theropod dinosaurs. This suggests that when dinosaurs were beginning to fly they were experimenting with a range of wing structures. 



The find “completely changes our idea of dinosaur evolution”, lead researcher Min Wang from the Chinese Academy of Sciences told The Independent.

“We imagine dinosaurs have feathered wings but this latest discovery changes how we understand the origins of flight,” he said. 

Pictured is a life reconstruction (left) and 3D reconstruction (right) of Ambopteryx longibrachium (Chung-Tat Cheung/Min Wang)

The feathered dinosaur lived during the Upper Jurassic period in what is now Liaoning province in north-eastern China.

It would have spent most of its time in the trees, or flying between them, according to the paper published in the journal Nature.

It lived around the same time and place as dinosaurs with feathered wings.

However, feathered wings were ultimately more successful and led to the evolution of birds – bats did not evolve until after the extinction of dinosaurs 66 million years ago. 

Fossilised stomach contents containing undigested bone material suggested the creature hunted other animals.

“It was probably quite fierce,” said Dr Wang. 

The new specimen belongs to a group called the scansoriopterygids, which are tree climbers with very long hands and fingers.

Pictured: a) fossil; b) reconstruction of bones; c) close-up of the membranous wing; d) image of the bony stomach contents (Min Wang)

Ambopteryx was related to a similar dinosaur named Yi Qi which was found by a farmer in China in 2007. Yi Qi was the first specimen to be found with bat-like wings. 

“At least eight to ten species of dinosaurs at the time had feathered wings, but only two had membrane wings,” said Dr Wang. “The fossil record is not complete and the feathered wing is more widely distributed so I think they probably evolved earlier.” 

