On paper, iOS 16 is the version that comes after iOS 15. But in reality, iOS 16 finishes the work that started with iOS 14. This year’s update to Apple’s mobile operating system is all about making your phone more personal.
With iOS 14, Apple revamped widgets. That led to a new wave of apps that help you completely transform the home screen. People started uploading screenshots to social networks and it became a viral feature. The success of these new highly customizable widgets has surpassed all expectations.
With iOS 15, Apple helped you customize your phone to your needs with different Focuses. You could turn on the “Sleep” Focus at night to disable pages, turn off some notifications and more. You could turn on the “Work” focus to change your home screen and show your most important work information — a calendar widget, app icons for your work apps, etc.
With iOS 16, Apple redesigns the lock screen and ties up loose ends when it comes to device personalization. The lock screen is an important part of iOS. You likely tap on your iPhone display dozens of times per day to check the time or look at your most recent notifications.
And yet, you probably haven’t spent too much time thinking about it over the past few years. That’s because it’s been quite static and hasn’t changed much from a high-level perspective. There’s the current date, the current time and a long list of notification bubbles.
An extension of your personality
There’s a reason why your smartphone feels like your most personal device. It’s not your smartwatch even though it’s strapped to your wrist. It’s not your pair of AirPods even though you wouldn’t share them with anyone else because it would be gross. It’s the smartphone.
Over the past fifteen years, chances are many important life events occurred on your smartphone. You made new friends in various conversation threads, you broke up with a text message and met new people in dating apps, you accepted job offers and learned of the passing of a family member. Your phone makes you laugh and makes you cry.
But what makes your iPhone truly yours? Sure, you can put on a case that stands out and pick a wallpaper that reminds you of your happy place. But that’s not enough.
With iOS 16, Apple has completely revamped the lock screen. It starts with new features for the wallpaper. When you pick a photo in your photo library, you can now enable a depth effect to make a face, an animal or an object stand out from the background. It can slightly overlap with the current time, which makes it look like a magazine-style masthead.
Apple lets you apply an effect to the photo by swiping left and right. By default, you get an unedited version of the photo (“natural”), but you can put an emphasis on the subject in the foreground with the “studio”, “black & white” or “color backdrop” effects. If you’re more into minimalistic wallpapers, you can apply a “duotone” or “color wash” effect to turn a photo into a monochrome wallpaper.
Combined with Apple’s smart suggestions of people and landscapes, you can get a cool wallpaper in just a few taps and swipes. And if you want to get a little surprise every time you tap on your phone, you can enable photo shuffle to refresh your wallpaper daily, hourly, on lock or on tap.
When you’re done configuring your lock screen, you can save it and associate the same wallpaper with your home screen as a pair. By default, Apple blurs photo wallpapers so that app icons remain easy to find. But you can customize the home screen in case you want something more specific.
If photos aren’t your thing, iOS 16 has a sort of built-in wallpaper generator. You can create color gradients and emoji grids featuring your favorite emojis.
There are also two cool dynamic wallpapers, one weather-based and one astronomy-based. The astronomy one has a particularly cool effect. You can get a zoomed-out view of planet Earth (which reminds me of the very first iPhone lock screen), and it zooms in on your current location when you unlock the phone.
Add widgets and focuses
Sure, the lock screen looks better. But it also works better. In addition to the new wallpaper engine, you can add tiny informative widgets. The idea is that you should be able to glance at your phone to get data points without having to unlock it.
By default, Apple lets you see weather data, the current time in other time zones, your Activity ring progress, battery information, your next alarm or a list of your next reminders, calendar events, etc.
But it’s going to be interesting to see how third-party app developers embrace the feature. We’ll probably see widgets that track the progress of a package, display upcoming birthdays or alert you in case of incidents on your subway line.
With iOS 16, you can save multiple lock screens and build your own lock screen library. In some situations, you may want to see different widgets. For instance, if you’re at work, you might switch to a less personal wallpaper with work-focused widgets.
And yes, you guessed it, Apple has paired these new lock screen features with Focuses. With iOS 15, Focuses were somewhat limited. You could enable or disable some home screen pages with a specific Focus but that was it.
Now, you can tie a specific Focus mode with a lock screen and home screen. Whenever you switch from one mode to another, your entire phone adapts to this new environment. Essentially, you can have a day phone and a night phone without having to carry two different phones.
Let me give you a practical example. I have configured a “Sleep” Focus that turns off all notifications. When I turn it on, it now also changes my wallpaper so that it becomes completely black. There is only one widget on this specific lock screen — my alarm. In the morning, when I turn off this Focus, my wallpaper changes, and my notifications start showing up again.
I could go on and on about these new customization features. At first, I thought it was gimmicky and that I would only change my lock screen once and forget about it. But once I saw the full picture, I could see the potential for a phone that adapts to your needs throughout the day.
All the rest
The new lock screen is clearly the most remarkable feature of iOS 16. But many different parts of the operating system have also been improved in one way or another.
For instance, there are new dynamic notifications called Live Activities. While they will be even more useful with the Dynamic Island of the iPhone 14 Pro, this new notification type opens up some new possibilities. For instance, Uber could use it to let users track the progress of their driver from the lock screen.
Even social apps like BeReal could use it. When users receive the notification that says “Time to BeReal”, they are supposed to take a photo within two minutes. BeReal could display a countdown timer in its notification directly.
Live Activities will be coming later this year, just like another feature that I’m really excited about — shared photo libraries. In addition to your personal photo library, you can now create or join a shared library with your family and close ones. This way, you no longer have to send photos to each other, as anybody in the shared library can view the original, full-resolution photos.
When you first set it up, you can choose to add some photos based on a start date or the people in the photos, or all your past photos. You can also choose to manually add past photos to the shared library later.
From the camera interface, you can then choose if photos you are taking should be saved to your personal photo library or your shared photo library. But we’ll have to wait a few more weeks to start using it.
In other small but meaningful changes: In Messages, you can now edit a message sent via iMessage, or even unsend it altogether. You can also mark a conversation as unread.
You can enable the battery percentage icon again in the settings.
If you’re into long-distance watch parties, you can start a SharePlay session and chat at the same time in Messages.
Mail has been modernized — better search, scheduled messages and the ability to snooze conversations.
You can select an object, a person or an animal in a photo and lift it from the background. It can be useful if you like to sell clothes online.
You can now select text in videos — not just photos. And iOS will help you convert currencies or units in case that’s useful.
You can now dictate and type text more fluidly. For instance, you can dictate some text, pause, highlight some text, dictate some more text, etc.
Apple Maps now supports multi-stop routing.
Later this year, you’ll be able to split the cost of Apple Pay purchases using Apple Pay Later. This will only work in the U.S.
The Home app has been redesigned. You no longer have to swipe between rooms as everything is neatly displayed on the same page.
Your iPhone now has a move goal in the Fitness app — no Apple Watch required.
There’s a new lockdown mode in iOS 16 in case you need an extreme level of protection against spyware.
The Weather app has been improved with hourly forecasts for the next 10 days. Just tap the forecast module to expand. And the list goes on and on. It’s always nice when you get so many quality-of-life updates. But it’s clear that the updates to the lock screen are the reason why people will be upgrading to iOS 16. And on that front, Apple has done a good job. | Operating Systems |
In the future OSes may be smaller, written specifically for customized chips and systems, with much less overhead.
The push toward disaggregation and customization in hardware is starting to be mirrored on the software side, where operating systems are becoming smaller and more targeted, supplemented with additional software that can be optimized for different functions.
There are two main causes for this shift. The first is rising demand for highly optimized and increasingly heterogeneous designs, which are significantly more efficient when coupled with smaller, more targeted OSes. The second is the rollout of AI/ML just about everywhere, which doesn’t benefit from monolithic operating systems that are designed to allocate resources and manage permissions, either for a single device or across a distributed network. In both cases, connecting other software to a single OS through application programming interfaces adds power and performance overhead.
“ML inference code can run on a variety of processing elements, each with different power and performance profiles,” said Steve Roddy, chief marketing officer at Quadric. “You might have a situation where in one state of operation of a mobile phone the OS might decide to run ML Workload A on the dedicated ML processor — whether that be the GPNPU, NPU, or offload accelerator — but in another scenario it might choose to run that same workload on a different resource.”
In some designs, there is no OS at all. Instead, frameworks like PyTorch and TensorFlow may have enough flexibility to handle resource allocation themselves, which is more efficient but more design-intensive. Yet that approach also provides the flexibility of migrating software independent of the hardware, which has significant power and performance benefits.
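To make that concrete, here is a minimal, hedged sketch of framework-level resource allocation in PyTorch: the decision about where a workload runs is made by ordinary application code rather than by the operating system. The pick_device helper and the prefer_accelerator policy flag are illustrative assumptions, not part of any real scheduler.

```python
import torch

def pick_device(prefer_accelerator: bool) -> torch.device:
    # A toy "scheduler": use the accelerator when one is present and the
    # policy asks for it; otherwise fall back to the CPU.
    if prefer_accelerator and torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

model = torch.nn.Linear(128, 10)   # stand-in for an ML inference workload
batch = torch.randn(32, 128)

device = pick_device(prefer_accelerator=True)
model = model.to(device)           # the framework, not the OS, places the work
logits = model(batch.to(device))
print(device, logits.shape)
```

The same pattern scales up: a framework can keep the graph definition hardware-agnostic and decide placement per call, which is what makes the thin-OS (or no-OS) approach workable.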
“In the industry it’s hard to understate the importance of ease of adoption,” observed Steven Woo, fellow and distinguished inventor at Rambus. “Nobody’s going to buy a more efficient car if the price is having to sit backwards. You need to think about fitting within existing business practices and supply chain behavior. Your ideal is to be essentially plug-and-play within each of those environments. When you upgrade to a new computer, you’d like a program to run without having to recompile it.”
With AI/ML, the key components are a computationally resource-intensive training phase and a generally less intensive inference phase. With inferencing, the workloads are specified in high-level graph code and are not necessarily hardwired to a specific type of processor. That frees up the OS to handle other functions. But it also provides the option of sizing the OS to the workload.
Consider a shopping recommendation application, for example, which suggests different chairs based on a user’s queries. The recommendation engine might run best and at lowest power on a dedicated ML processor. But when the user clicks an icon to show a recommended chair superimposed in the user’s living room, via the phone camera, the compute resources need to be adjusted to the task. The camera overlay of the virtual chair becomes the highest-priority task and needs to run on a bigger ML processor, or multiple smaller ML processors working in parallel, while the recommendation engine might need to be switched over to one or more CPUs to boost the performance.
As those types of workload-specific demands grow, it’s not clear whether monolithic operating systems can adapt to this level of specialization. So while SoCs are being disaggregated into heterogeneous and often customized components, such as chiplets, the same trend is underway on the software side.
“Major processor companies are increasingly deploying specialized instruction set processors in markets that previously relied solely upon scaling the speed and number of universal processors,” Roddy said. “Both Intel (‘accelerated computing’) and AMD (‘APU’) have introduced server and laptop processors with heterogenous instruction sets, much like the mobile phone processor vendors have utilized CPU + GPU + Vision processor (VPU) + ISP + Audio DSP architectures for more than a decade. Now, the ‘traffic cop’ OS at the heart of these latest beefy SoCs has to map a variety of specialized workloads across three or four or more processor targets.”
The more efficient approach is to have multiple smaller OSes, or to leverage that functionality elsewhere. Whether this happens, or whether OSes are retrofitted to handle additional tasks and slim down resources for others remains to be seen. But either is a significant change.
“That complexity feels like an extension of what existing operating systems do today, not a wholesale replacement,” Roddy said. “We don’t see any reason why today’s leading OS code bases will be eclipsed any time soon as long as they evolve. They don’t need to be ‘replaced’ in any sense, but they will need to get better at juggling tasks across heterogeneous resources.”
Rethinking OSes with AI
AI itself is one possible answer to faster allocation. Currently, one of the ways operating systems allocate resources is to guess the user’s next move, through a prediction mechanism like pre-fetch in search, in which the OS determines what to bring up next based on prior use patterns. The problem is that the current heuristics for anticipating next moves are written by humans, who must try to imagine and code for all possible scenarios. This is an argument for why OSes should better accommodate the functions of AIs — and for how AI itself can help.
“If you’re an AI person, you see the results of these heuristics and say to yourself, ‘This is a machine trying to predict what’s happening in the future,'” said Martin Snelgrove, CTO at Untether AI. “That’s what AI does for a living. Next-gen operating systems will throw out all of these hand-made heuristics and replace them with neural nets. Your machine should be looking at your pattern of usage. For example, it should discover that you always bring up a Word document right after you’ve terminated a Zoom call. It should then be using any available space to get Word mostly pre-loaded about 20 minutes into a Zoom call, because it knows your calls usually last about that long.”
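As a toy illustration of that idea, here is a hedged PyTorch sketch of a tiny next-app predictor: it reads a short history of app launches and emits a guess at what the user will open next, which a prefetcher could act on. The app names, the model size, and the idea of wiring it into an OS prefetcher are assumptions made purely for illustration, and the model below is untrained, so its output is arbitrary.

```python
import torch
import torch.nn as nn

# Hypothetical app vocabulary; indices stand in for apps the user launches.
APPS = ["zoom", "word", "mail", "browser", "terminal"]

class NextAppPredictor(nn.Module):
    def __init__(self, n_apps: int, hidden: int = 16):
        super().__init__()
        self.embed = nn.Embedding(n_apps, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_apps)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len) of app indices -> logits over the next app
        x = self.embed(history)
        _, h = self.rnn(x)
        return self.head(h[-1])

model = NextAppPredictor(len(APPS))
history = torch.tensor([[0, 2, 0]])           # e.g. zoom -> mail -> zoom
next_app = model(history).argmax(dim=-1)      # untrained, so this is arbitrary
print("prefetch candidate:", APPS[next_app.item()])
```

Trained on real launch logs, a model this small could plausibly run in the background at negligible cost, which is Snelgrove’s point: the hand-written heuristic is replaced by something learned per user.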
Neural nets further the argument for smaller OSes. “Your operating system doesn’t need to be a gigabyte anymore, because all you’re doing is expressing the structure of the neural net, which isn’t a lot of code,” Snelgrove said. “The metaphor in Unix is that everything is a file. In AI, the key metaphor is that everything is an actor in the sense that the net receives inputs, thinks about them, and produces outputs. In Unix you can pipe things in, there can be filters, there can be inputs and outputs. In an AI system, a file can be an actor that just receives and sends the same thing. But most things will be active. So if you have a network whose job it is to paint all the roses pink and all the gardens green, that’s just an actor, and you give it images and it gives you images back.”
Looking ahead
The pragmatic thinking about increasing extensibility, rather than creating a completely new operating system, permeates IBM’s approach. In 1974, the company introduced the Multiple Virtual Storage (MVS) OS. Over the following decades, it has gone through several updates, culminating in the current 64-bit z/OS.
But in 2022, IBM shifted direction somewhat. It embedded its Telum inference accelerator on the same silicon as its processor cores, sharing memory and level 3 cache with the host, in order to reduce latency and allow for more real-time AI within z/OS applications.
Enterprise applications, it turns out, are where AI dreams go to die. The training sets may be vast, the models may be accurate, but what high-volume transactional workload customers need most is real-time inference.
“We had discussions, conducted surveys, and did a lot of research to understand the main challenges that clients faced in actually leveraging AI in mission-critical enterprise workloads,” said Elpida Tzortzatos, fellow and CTO for AI on IBM zSystems. “What we found was they couldn’t get the response times and throughput they needed.”
For example, financial institutions that wished to do fraud detection in real time often could only spot-check because of the drag on throughput. [1] For AI to fulfill its promises, a fraud prediction needs to come back before the transaction completes. Otherwise, the system hasn’t done enough to protect the institution’s end customers, Tzortzatos said.
In addition, IBM’s customers wanted to be able to easily consume AI without slowing down their applications or transactional workloads. “We heard, ‘I want to be able to embed AI into my existing applications without having to re-architect my systems or re-factor these applications. At the same time, I need to be able to meet stringent SLAs of one millisecond response times,‘” she noted.
Fig. 1 The AI ecosystem, as seen by IBM. Source: IBM
All of this experience led Tzortzatos to recognize that the industry needs to continue to evolve and optimize operating systems for AI inference and training. “I don’t think it will be a completely new operating system,” she said. “Operating systems are going to evolve. They’re going to become more composable, where you can plug in those newer technologies and not impact the applications that are running on those operating systems.”
For now, rather than creating a new OS, it seems that commercial AI/ML will continue to rely on frameworks, as well as the ONNX exchange format, which allows developers to work in nearly any framework. The resulting code will be compatible with most inference engines. And if all goes as it should, the results will run on the current installed base of enterprise-scale OSes, like Linux, Unix, and z/OS.
At the same time, specialized AI/ML hardware may completely eliminate the need for an OS. “Already, part of the work of an OS is being done in accelerators. A data center GPU has its own scheduler and manages the work that needs to be done,” said Roel Wuyts, manager of the ExaScience Life Lab at imec.
When working with accelerators, an OS is a moot point, Cerebras CEO Andrew Feldman said. “By the time you get to the accelerator, you want to bypass the OS. It doesn’t help you with accuracy. The OS is designed to allocate hardware resources, and our compiler does that instead. We want the user writing in a language that the ML world is familiar with, and we don’t want them ever thinking about the challenge of distributing that work across 850,000 programmable elements. As a result, they never think of the machine at all. They just write their TensorFlow or PyTorch. The compiler puts it in and we don’t have to worry about any latency or a reduction in speed brought on by an OS allocating hardware resources.”
And at the other end of the scale, for single, embedded devices, such as smart cameras, the status quo is fine. “Do we need a new OS for machine learning AI at the lower level? No, that is regular compute work,” said Wuyts. “We can describe it and we can schedule it, and we already do that quite well.”
However, when you go beyond a single device into networked, distributed systems, a fresh approach could be quite valuable. Wuyts proposed using the data from a smart camera with another application, say a face-detecting security system in a train station, which needs to have the capability to zoom in and search a database of known terrorists. That scenario makes a developer’s life extremely difficult because of the interplay of different applications, devices, and networked demands.
“For that situation, you could see a new kind of OS that is distributed. We already looked at that a bit in the past. If you have a neural network or machine learning network, you would actually try to run some parts on your smaller devices and some parts more in the back end,” Wuyts said.
A developer should be spared having to think about partitioning and communication between disparate components, he said. “Instead, that can be done by this operating system, in the same way that I currently write a Windows application or Unix application, I have a scheduler that can take care of most [basic partitioning], but lets me take control if I want to do something fancy. For those new types of applications that are now becoming used a lot, it makes sense to run them as distributed machine learning applications.”
Quadric’s Roddy noted there is a significant distinction between AI/ML workloads that the OS manages versus how much AI/ML will be included in the OS itself. “Will the Linux kernel include an ML inference graph that determines task priority and resource allocations rather than relying on deterministic or heuristic code to manage the system? The external workloads to be managed (i.e. a camera function in a smartphone app) will be large, heavyweight ML graphs that run best on a GPU or GPNPU. But even if the Linux kernel task manager adds an ML inference component, it won’t be a monstrous multi-TOP/s network. Rather, it would be something far more lightweight that can run on modern applications CPUs running the rest of the OS kernel.”
Roddy noted that machine learning potentially could enhance the usefulness of power management in devices, such as learning the behavior of each user, for instance, to tune the power/performance profile of a business executive’s cell phone differently than a high school student’s. “But that hypothetical is an enhancement of an existing OS function, not a radical re-write of the underlying functionality of an OS.”
Conclusion
One potential scheme several sources discussed is that in order to meet energy needs, training might be done, as it is now, on classical OSes, while inference might be performed with a smaller “OS” that’s essentially just a job scheduler. This also presumes a world in which, in order to save energy output, there will be less reliance on compute in the cloud, and more directly on edge devices.
On the other hand, said Steven Latre of imec, “I would assume and predict that in the future, training and inference is going to be much more of a natural combination, and not that separate as we have right now, where we train it once and we deploy it somewhere else.” That said, he still sees a future in which AI will naturally bifurcate.
“There are two complementary directions in which AI is evolving. Large-scale compute AI, which can be linked with other types of scientific computing, HPC types of workloads. With those very large models, the issue at a hardware level is bottlenecks that might arise in communication and compute. So the main challenge is to do the scheduling in the right way to alleviate all the different bottlenecks,” Latre said. “In the more edge situations, it’s a completely different approach, where the concerns are less bottleneck-oriented, but more energy constrained with more latency constraints because they’re in more real-time situations. So that probably also warrants not just a single OS, but different types of OSs for different situations.”
Discussing the possibilities of neural networks, Snelgrove offered this food for thought: “If you look up von Neumann’s original paper [2], it starts with pictures of neurons. If we’re at ENIAC now, it’s the 1950s. We shouldn’t think what it’s going to look like in the 2020s, we should just try to get to 1970. From there, we can discuss 1990.”
References
1. Sechuga, G. “Preventing Fraud with AI and IBM Telum Processor: The Value of Investing in IBM z16.” IBM Community Blog, 2022. https://community.ibm.com/community/user/ibmz-and-linuxone/blogs/gregory-sechuga/2022/07/05/preventing-fraud-with-ai-and-ibm-telum-processor
2. von Neumann, J. “First Draft of a Report on the EDVAC.” Contract between the US Army Ordnance Department and the University of Pennsylvania, June 30, 1945. https://web.archive.org/web/20130314123032/http://qss.stanford.edu/~godfrey/vonNeumann/vnedvac.pdf | Operating Systems |
Today, chances are you're working on a Windows-powered PC. Tomorrow may be another story. I've been watching how Microsoft plans to move you from PC Windows to a cloud-based Desktop-as-a-Service (DaaS) model for years. Yet more proof has recently surfaced about Microsoft's master cloud desktop plan.
Zac Bowden, a senior editor at Windows Central, spotted an internal Microsoft document that was uncovered in the stalled-out Microsoft acquisition of Activision. In a June 2022 Microsoft internal presentation, the company said it plans to "Move Windows 11 increasingly to the cloud: Build on Windows 365 to enable a full Windows operating system streamed from the cloud to any device. Use the power of the cloud and client to enable improved AI-powered services and full roaming of people's digital experience."
Most of the focus here appears to be on a consumer version of Windows 365 Cloud PC. Today, there are two editions: Business and Enterprise. You can run either of them on a Windows PC, a Chromebook, a Linux PC, or even an iPad. They're designed to bring Windows to pretty much any platform. (I wouldn't try it on an iPhone or Android smartphone, but if you're a glutton for punishment, you can try it on those too.)
The Business version starts at $31 per user, per month. For that, you get a 4GB Azure VM with 128GB of storage, along with Microsoft 365 apps, Outlook, and OneDrive. Personally, that's too lightweight for me — the next step up, with 8GB of RAM for $41, sounds much more practical.
The Enterprise Edition comes at the same price points. Its only significant difference is for Windows administrators; Enterprise includes Intune, Microsoft's cloud-based endpoint management service.
On Windows 11 machines, you can access your Cloud PCs via the Windows 365 app. And within the next few months, Windows 365 Boot, now in beta, will enable you to log directly into your Windows 365 Cloud PC without booting your local Windows 11 PC.
The point of this is so multiple users can use the same PC to sign into their own personal, assigned, and secure Cloud PC. Its target audience is workers in nursing, salespeople, and call centers, who share business devices.
Pricing remains uncertain for this version, and we don't yet have a price for the family version either. There's been speculation it might be $10 a month for a "family" account. If so, this would be a loss-leader price designed to get people to give Windows 365 a try.
This has been a winning strategy in the past. When I first used a PC in an office, I did it with my own KayPro II CP/M-80 PC. Bring Your Own Device (BYOD) was a successful strategy long before the phrase existed.
Microsoft has also just released a new business version of its Cloud PC: Windows 365 Frontline. This take on the Windows cloud is meant for retail, healthcare, hospitality, and other vertical industry employees. Microsoft claims there are two billion frontline workers, so the company has high hopes it will prove to be a gangbuster service.
Windows 365 Frontline starts at $42 for 3 users per month. This entry-level configuration comes with two virtual CPUs, 4GB of RAM, and 64GB of storage. For a frontline service worker, that should be more than enough.
Looking ahead, Microsoft expert of experts Mary Jo Foley, Directions on Microsoft editor in chief, sees Windows 10 22H2 as the end of the road for that OS, with no more new features coming to Windows 11 in the second half of 2024. Both operating systems' end-of-support date is October 14, 2025.
Now we all know Microsoft will offer support well beyond that date. It would face enraged customers if it didn't. But what comes next? We don’t know for certain (not even Mary Jo).
Maybe there’ll be an AI-powered Windows 12 — whatever that means. Personally, I'm inclined to ignore the current AI hype and focus on what Microsoft has already been doing: Ratcheting up a cloud PC model approach. The old desktop's death is coming. Long live the cloud desktop. | Operating Systems |
Researchers say that nearly 336,000 devices exposed to the Internet remain vulnerable to a critical vulnerability in firewalls sold by Fortinet because admins have yet to install patches the company released three weeks ago.
CVE-2023-27997 is a remote code execution vulnerability in Fortigate VPNs, which are included in the company’s firewalls. The vulnerability, which stems from a heap overflow bug, has a severity rating of 9.8 out of 10. Fortinet released updates silently patching the flaw on June 8 and disclosed it four days later in an advisory that said it may have been exploited in targeted attacks. That same day, the US Cybersecurity and Infrastructure Security Agency added it to its catalog of known exploited vulnerabilities and gave federal agencies until Tuesday to patch it.
Despite the severity and the availability of a patch, admins have been slow to fix it, researchers said.
Security firm Bishop Fox on Friday, citing data retrieved from queries of the Shodan search engine, said that of 489,337 affected devices exposed on the internet, 335,923 of them—or 69 percent—remained unpatched. Bishop Fox said that some of the vulnerable machines appeared to be running Fortigate software that hadn’t been updated since 2015.
“Wow—looks like there’s a handful of devices running 8-year-old FortiOS on the Internet,” Caleb Gross, director of capability development at Bishop Fox, wrote in Friday’s post. “I wouldn’t touch those with a 10-foot pole.”
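Bishop Fox hasn’t published the exact search it ran, but the general approach of counting exposed devices can be sketched with the official Shodan Python library. The API key placeholder and the product filter below are assumptions for illustration, not the firm’s actual query, and the numbers returned will differ from the figures above.

```python
import shodan  # pip install shodan

# Hypothetical key and query; tune the filter to the device you care about.
api = shodan.Shodan("YOUR_SHODAN_API_KEY")
result = api.count('product:"FortiGate"')  # count only, no results downloaded
print("Internet-exposed matches:", result["total"])
```

Pairing counts like this with the vendor’s affected-version list is roughly how “hundreds of thousands of devices remain unpatched” figures are produced.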
Gross reported that Bishop Fox has developed an exploit to test customer devices.
The screen capture Bishop Fox published shows the proof-of-concept exploit corrupting the heap, a protected area of computer memory that’s reserved for running applications. The corruption injects malicious code that connects to an attacker-controlled server, downloads the BusyBox utility for Unix-like operating systems, and opens an interactive shell that allows commands to be remotely run on the vulnerable machine. The exploit requires only about one second to complete. The speed is an improvement over a PoC Lexfo released on June 13.
In recent years, several Fortinet products have come under active exploitation. In February, hackers from multiple threat groups began exploiting a critical vulnerability in FortiNAC, a network access control solution that identifies and monitors devices connected to a network. One researcher said that the targeting of the vulnerability, tracked as CVE-2022-39952, led to the “massive installation of webshells” that gave hackers remote access to compromised systems. Last December, an unknown threat actor exploited a different critical vulnerability in the FortiOS SSL-VPN to infect government and government-related organizations with advanced custom-made malware. Fortinet quietly fixed the vulnerability in late November but didn’t disclose it until after the in-the-wild attacks began. The company has yet to explain why or say what its policy is for disclosing vulnerabilities in its products. And in 2021, a trio of vulnerabilities in Fortinet’s FortiOS VPN—two patched in 2019 and one a year later—were targeted by attackers attempting to access multiple government, commercial, and technology services.
So far, there are few details about the active exploits of CVE-2023-27997 that Fortinet said may be underway. Volt Typhoon, the tracking name for a Chinese-speaking threat group, has actively exploited CVE-2022-40684, a separate Fortinet vulnerability of similar high severity. Fortinet said in its June 12 disclosure that it would be in keeping with Volt Typhoon to pivot to exploiting CVE-2023-27997, which Fortinet tracks under the internal designation FG-IR-23-097.
“At this time we are not linking FG-IR-23-097 to the Volt Typhoon campaign, however Fortinet expects all threat actors, including those behind the Volt Typhoon campaign, to continue to exploit unpatched vulnerabilities in widely used software and devices. For this reason, Fortinet urges immediate and ongoing mitigation through an aggressive patching campaign,” Fortinet said at the time.
| Operating Systems |
PC gamers sticking with old versions of Windows may finally need to upgrade if they want to keep playing the games in their Steam libraries. Valve announced this week that it will stop supporting Steam on Windows 7 and Windows 8 on January 1, 2024. "After that date," the company's brief announcement reads, "the Steam Client will no longer run on those versions of Windows."
That timeline is still fairly generous to users of the 14- and 11-year-old operating systems, given that Microsoft ended all support for both in January 2023. GPU makers like Nvidia and AMD also stopped supporting their latest GPUs in Windows 7 or Windows 8 quite a while ago.
This move will affect a vanishingly small but persistent number of Steam users. According to Valve's own survey data for February of 2023, the 32- and 64-bit versions of Windows 7 and Windows 8.1 account for a little less than two percent of all Steam usage. This is almost nothing next to Windows 10 and Windows 11 (nearly 95 percent), but all macOS versions combined account for only 2.37 percent, and all Linux versions combined (including the Steam Deck) add up to just 1.27 percent.
The main culprit, according to Valve, is the built-in Chromium-based browser that Steam actually uses to render the Steam store and other bits of the UI. Chrome dropped support for Windows 7 and Windows 8 right around the same time that Microsoft ended support for the operating systems earlier this year. Versions that still work in Windows 7 and 8 will be susceptible to security bugs and, increasingly, rendering bugs and other functional problems as time goes on.
Upgrading to Windows 10 should still be relatively simple for Windows 7 and Windows 8 users, thanks to a free upgrade offer from Microsoft that never really ended, even though it was supposed to end in 2016. If your PC still runs Windows 7 or 8 reasonably well, it should do the same for Windows 10, and Windows 10 will continue to receive security updates from Microsoft until at least October of 2025. If you want to upgrade all the way to Windows 11, though, you'll probably need a hardware upgrade, or you can take your chances with an unsupported install—Windows 11's more restrictive hardware requirements will keep it from running on most PCs that can run Windows 7 or Windows 8.
Valve recommends upgrading to a newer version of Windows now rather than waiting for the January 2024 cutoff, primarily because these PCs are now unpatched and more vulnerable to malware. | Operating Systems |
WTF?! For the past few years, Microsoft has been accused of regularly violating user privacy. Compared to the classic NT-based systems, Windows 10 and especially Windows 11 are two completely different beasts in this regard.
How much data is a Windows operating system sending to online servers? According to a recent video from The PC Security Channel (TPCSC), the most up-to-date version of the Redmond OS is a real "talker" when it comes to telemetry and other data about users' preferences and online behavior.
Titled "Has Windows become Spyware?", the video describes how live capture sessions can show online communication between Windows and external servers. The video creator used Wireshark, a well-known (and free) network protocol analyzer that's useful for seeing what is happening on a network "at a microscopic level."
Using Wireshark to check what a freshly-installed copy of Windows 11 was doing on a brand-new laptop, what they saw was eye-opening to say the least: just after the first boot, Windows 11 was quick to try and reach third-party servers with absolutely no prior user permission or intervention.
By using a Wireshark filter to analyze DNS traffic, TPCSC found that Windows 11 was connecting to many online services provided by Microsoft including MSN, the Bing search engine and Windows Update. Many third-party services were present as well, as Windows 11 had seemingly important things to say to the likes of Steam, McAfee, and Comscore ScorecardResearch.com, which is a market research effort that "studies and reports on Internet trends and behavior."
Many of the Windows 11 initial DNS queries were designed to provide "telemetry" data to market research companies, advertising providers and even geolocation-related domains like geo.prod.do, with no permission or web browsing activity needed. The latest and greatest in the Windows line of operating systems is seemingly designed to "spy on" anyone and everything from the get-go, TPCSC suggests.
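You don’t need Wireshark’s GUI to repeat this kind of check. As a rough, hedged sketch, the Python snippet below uses scapy to log outbound DNS queries much as the video’s display filter does; run it with administrator/root privileges on a freshly booted machine and watch which domains get resolved before you open anything.

```python
from scapy.all import sniff, DNSQR  # pip install scapy; needs root/admin rights

def log_query(pkt):
    if pkt.haslayer(DNSQR):
        # qname is the domain the OS (or some app) is trying to resolve.
        print(pkt[DNSQR].qname.decode(errors="replace"))

# Capture the first 50 DNS queries leaving this machine, without storing them.
sniff(filter="udp port 53", prn=log_query, count=50, store=False)
```

Note this only sees plain-text DNS on port 53; queries sent over DNS-over-HTTPS would not show up in this filter.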
As a comparison, or perhaps as a critical note about the current state of privacy in the Windows ecosystem, the YouTube channel tried the same packet-sniffing activity via Wireshark on Windows XP, which was first released in 2001.
According to their analysis, Windows XP doesn't even know what the word "telemetry" means: the first DNS traffic from the freshly-installed OS was to try and contact the Windows Update service, and that's all. No market research, no browsing tracking, nothing at all.
Some people are trying to justify Windows 11's behavior as the lesser evil in a technology world full of third-party services and online features that need to be fed data to work as intended. Responding to comments on the video, TPCSC is still warning the most knowledgeable and privacy-aware users that even when telemetry is turned off via third-party utilities, Windows 11 is still "sending things" online via the DNS protocol.
iOS 17 is Apple’s upcoming operating system for its iPhone models. After improvements on the Lock Screen, support for the Dynamic Island, and a new Freeform app with iOS 16, here’s what we know about the company’s next iOS.
What will Apple call the next iOS?
It’s always hard to predict what Apple will call its macOS operating system versions. With iOS, however, things are a bit more straightforward.
If Apple follows the trend, iOS 16’s successor will be called iOS 17. Internally, Apple calls this next operating system Dawn – but, of course, it doesn’t mean the Cupertino firm will name its iOS updates the way it does with macOS.
iOS 17 features
Unlike hardware releases, it’s difficult to know which software improvements Apple will bring to its new operating systems since it’s all in-house. In January, Bloomberg’s Mark Gurman said iOS 17 could have fewer features since it focuses on the company’s Mixed-Reality headset and the upcoming xrOS software.
Unreliable leaker LeaksApplePro says Apple won’t make groundbreaking changes on iOS 17 compared to iOS 16. They claim the company has some upgrades in store for the Music app relating to navigation within the app, but they didn’t share any additional information about the changes.
They said Apple would tweak the Mail, Reminders, Files, Fitness, Wallet, and Find My apps in iOS 17. Meanwhile, the Home app is expected to see significant changes, but once again, the leaker could not provide any noteworthy specifics.
Another common-sense expectation for iOS 17 is that Apple will likely expand Lock Screen customization and improve Dynamic Island usage, two of the new features that came with the iOS 16 cycle.
Third-party app stores support
Interestingly enough, one of the features that should be available with iOS 17 is support for third-party app stores. With the European Union’s new Digital Markets Act becoming fully applicable by next year, Apple is planning to make the necessary changes in time for the release of iOS 17.
The changes would only go into effect in Europe at first, as other countries — including the United States — have yet to pass similar laws forcing Apple’s hand.
Allowing third-party app stores would be one of many changes Apple would make in order to comply with the Digital Markets Act. Other changes include opening more of its APIs to third-party apps, removing the requirement for third-party web browsers to use WebKit, and potentially allowing users to install third-party payment systems.
iOS 17 release date
If Apple follows the trend, the company will unveil iOS 17 at WWDC 2023. The conference hasn’t been announced yet, but it usually takes place in the first week of June. After that, iOS 17 will be available for developers to try out.
Around July, a public beta will be made available, with the official release expected in September, around the iPhone 15 announcement.
Compatible iPhone models
With iOS 16, Apple was pretty radical about dropping support for old iPhones, meaning no iPod touch could run the latest iOS version anymore. In addition, only 2017 iPhone models or newer could update to iOS 16.
If this year isn’t full of new features, Apple might continue to support all iPhone models it currently offers iOS 16 to. Still, if the company adds several new functions to iOS 17, we might see the first iPhone with a notch, the iPhone X, losing support for Apple’s upcoming operating system.
Currently, these are the iPhones that support iOS 16:
- iPhone 8 and 8 Plus
- iPhone X
- iPhone XR, XS, and XS Max
- iPhone 11
- iPhone 11 Pro and 11 Pro Max
- iPhone SE (2nd gen)
- iPhone 12 mini and iPhone 12
- iPhone 12 Pro and iPhone 12 Pro Max
- iPhone 13 mini and iPhone 13
- iPhone 13 Pro and iPhone 13 Pro Max
- iPhone SE (3rd gen)
- iPhone 14 and iPhone 14 Plus
- iPhone 14 Pro and iPhone 14 Pro Max
iOS 17 concepts
More recently, graphic designer Parker Ortolani shared his thoughts on what he believed iOS 17 could add. His concept is based on three main features: More customization to the Lock Screen, improved Dynamic Island usage, and a new AI-powered Siri app. | Operating Systems |
Tomorrow’s the big day, and we’re expecting big things – well, one really big thing for sure. Apple will kick off WWDC 2023 at 10AM PT Monday June 5 with its customary keynote. As ever, the event will focus on the latest versions of the company’s operating systems, namely: iOS/iPadOS 17, macOS 14 and watchOS 10.
We’re also expecting some new additions to the MacBook line, potentially including a 15-inch Air. You can read our full rundown of the rumors over here.
But let’s be real. All eyes will be focused on the company’s (ridiculously) long-rumored Reality Pro AR/VR (MR, if you will) headset. After a reported seven to eight years of development, the company is finally ready to unveil the system – or a developer version, at least. Sink or swim, it’s going to be one of the most fascinating WWDCs in recent memory. | Operating Systems |
Android 14 is finally here, and that means Android users can finally download the latest rendition of Google’s operating system, starting with Pixel devices. But if you’re rocking a phone that supports Android 14 and you haven’t been keeping up with everything, you might be wondering why you should bother upgrading. Well, here are three great reasons you should upgrade your phone to Android 14.
Battery life improvements
Aside from the very obvious “security updates” that new Android operating systems bring, Android 14 is also set to provide Android users with a ton of battery life improvements. These aren’t major improvements, but Google has made some minor tweaks to the way that its operating system works, thus making it more efficient and less power-hungry.
Android 14 will also bring back the option to see how much screen time you’ve used since your last full charge, as it’s a really good way to keep up with how much battery life you’re spending doom scrolling on Twitter, TikTok, or other apps.
Custom lock screens
If you love Android for its customizability, then you’re going to love this next change in Android 14. That’s because Google is finally adding lock screen customization into the basic Android 14 settings. That means you won’t need to worry about purchasing an Android phone that offers its own special version of Android (like Samsung).
You’ll be able to alter the clock design, the colors on the screen, and even add shortcuts to useful features like QR scanning, your phone’s wallet, and the flashlight. This is more of a small feature, but it’s still an excellent reason to go ahead and upgrade to Android 14 if you want to customize your lock screen.
More file access control
This last reason might seem a bit mundane, but it’s actually one of the most important changes that Android 14 is bringing to the table. You get more control over how apps access your phone’s files. Android 13 added an option that gave apps all-or-nothing access to your phone’s photos and videos. Now, though, you’ll have a bit more control over what those apps can access.
This is a feature we’ve already seen utilized on iPhone and some other Android devices, but allowing you to decide which photos and videos your apps have access to is going to be key to keeping those important private photographs protected from malicious apps.
Of course, there are other reasons why upgrading to Android 14 is worthwhile. Google is also bringing satellite support to the operating system, which could help it provide better emergency support for people away from standard service areas. We aren’t quite sure exactly what this will look like, but it’s a worthwhile feature to get access to in any regard. | Operating Systems |
Apple just dropped a public beta of macOS 14 Sonoma. Perhaps you’ve been swimming in those wine country waters for a while now. After all, back at WWDC, the company announced that it was opening up early access to its new operating systems to anyone with a developer account — not just those shelling out the $99 for the Developer Program.
Of course, the regular bit of caution applies. I’ve been running betas on all my machines since last month’s developer conference and have encountered a few bugs here and there. Nothing major, but enough to suggest people hold off on installing on their daily drivers until the final version has dropped.
As is the case with all Apple OS drops, there’s a lot of common DNA between the different platforms — that’s something that seems to be truer and truer with each subsequent release. There are a number of shared features between macOS 14 and iOS 17. We’ll be hitting on those below, but kicking things off with some of the topline desktop-only additions.
As ever, it’s a free update, so we largely recommend those with compatible systems upgrade when the time is right. Of course, if you’ve got some older apps and workflows, it’s never a bad idea to wait a few weeks to see if folks are throwing up any red flags on the usual forums. Any major operating system upgrade runs the very real risk of breaking things. Updating is easy — reverse is generally less so.
Here’s what’s compatible with Sonoma:
• iMac: 2019 and later
• Mac Pro: 2019 and later
• iMac Pro: 2017
• Mac Studio: 2022 and later
• MacBook Air: 2018 and later
• Mac mini: 2018 and later
• MacBook Pro: 2018 and later
The biggest update? In a word: Widgets. Specifically, widgets are no longer the sole domain of the sidebar. Now you can drag and drop them onto the desktop — a trick borrowed from earlier iterations of iOS. It’s a dynamic process, and once on the desktop, the widgets will sit beneath open windows, turning into a translucent pane when you’ve got an app open. Interestingly, if you attempt to drag them on top of files sitting on the desktop, they’ll push them out of the way.
If you hold an associated iPhone nearby, an option will populate beneath the sidebar that lets you pull that device’s widgets to the desktop, as well.
The last few macOS updates have, thankfully, had a much greater focus on video conferencing. Along with improved webcam hardware and features like Continuity Camera (which lets you use a connected iPhone as a webcam), it’s good to see the company focused on this aspect of work and life that hasn’t faded even as COVID restrictions have loosened.
Joining things like Center Stage are features like reactions, which bring Messages animations like confetti, hearts and fireworks. These can be activated with a mouse click or hand gesture. Those are compatible across various teleconferencing apps including Zoom, so you don’t have to operate exclusively inside of FaceTime to take advantage. It’s a smart move on Apple’s part, as people frankly don’t use FaceTime for business calls. A new “share on” feature is also available in windows for apps like Preview, allowing you to share directly to a teleconferencing app.
There’s also Presenter Overlay, which turns your presentation into a background like a virtual whiteboard. Or you can turn your head into a bubble inset and let the presentation monopolize the screen.
Safari’s filing system gets another upgrade, with the addition of Profiles. You can split them into different places like “Home,” “Work” and “School” to make sure you don’t cross the streams of bookmarks, tab groups, history and cookies. The browser also now lets you save “web apps” to the Dock. Visit a site in Safari, go to File > Add to Dock and it will save the site’s favicon below for easy access.
New for families is the ability to create a group of shareable passwords that will dynamically update. You can choose to remove people from the list whenever. Safari will also lock private browsing windows when you’re away, obscuring the content and making them password protected to protect you from unwanted snooping.
As per usual, Messages is getting some cross-OS updates. That includes the ability to add multiple search filters, swipe to reply and the ability to share your location via Apple Maps (if so inclined). Autocorrect is getting upgraded, too, so you can finally type “fuck” and really mean it.
The final release is due to arrive at some point later this year (signs point to a September/October time frame). Let’s ducking go. | Operating Systems |
Apple has released the first version of macOS Ventura. Here’s what you need to know about the new features, whether they will run on your Mac, and how Apple’s apps including Mail and Safari will be changing. The Ventura name maintains Apple’s recent tradition of giving every version of macOS a name in addition to a version number. As usual, the name is taken from a landmark or area in California as has been the tradition since Mavericks launched in 2013. Prior to that, large cats were used as names for Apple’s Mac operating systems. This time the version number will be 13 (unlucky for some, but that didn’t stop Apple from calling the 2021 iPhone the iPhone 13). Update 10/24: macOS 13 Ventura is now available for all users. macOS Ventura: Release date Apple unveiled the features coming to the next version of macOS during the WWDC 2022 keynote on June 6 at 10 a.m. PT. The final version arrived on Monday, October 24. As with previous releases, it became available for download at 10 a.m. PT. The name of the next version of macOS is Ventura. macOS Ventura: Latest beta version Now that the final version of macOS Ventura has arrived for all users, we expect the first macOS 13.1 beta will arrive for testers soon. macOS Ventura: Compatibility Apple has confirmed that the following Macs are supported by macOS Ventura: MacBook models from 2017 or laterMacBook Air models from 2018 or laterMacBook Pro models from 2017 or laterMac mini models from 2018 or lateriMac models from 2017 or lateriMac Pro (all models)Mac Pro models from 2019 or laterMac Studio (all models)This means the following Macs, which were previously supported by Monterey, have now fallen off the list: iMac (models from 2015)MacBook Air (models from 2015 and 2017 models)MacBook Pro (2015 and 2016 models)Mac mini (2014 models) Mac Pro (2013 model – cylinder/trash can) MacBook (2016 model)The 2014 Mac mini was sold until 2018, the ‘trash can’ Mac Pro until 2019, and the 2017 MacBook Air was sold until July 2019. We had thought that Apple wouldn’t remove those Macs from the supported list, since people might have purchased the model just such a short time ago. At least they will still be supported by macOS Monterey for at least two more macOS generations. See: This is how long Apple supports Macs. To find out if your Mac will support Ventura read: macOS 13 Ventura compatibility: Can your Mac run the latest version? Some of the Macs that are supported by Ventura may not support all the new functions. Read: New macOS features that will only work on the newest Macs. Wondering how Ventura compares to Monterey? Read macOS Ventura vs Monterey. macOS Ventura: New features Here’s an overview of what’s coming in macOS 13 Ventura. Stage Manager Continuity continues to evolve with the introduction of Stage Manager – a new way to manage your desktop clutter that reminds us a little bit of Spaces, because it allows you to organize working areas and hide them away, albeit at the side of your screen, rather than the top. Here’s how to use Stage Manager to organize your windows. Continuity Camera Another continuity-related feature allows you to use your iPhone as a webcam as well as Handoff a FaceTime call from your iPhone or iPad to your Mac. Continuity Camera is a great way to benefit from the superior camera on the iPhone. One really impressive feature is Desk View, which displays two views to the person you are calling – your face and your desk. 
Using the iPhone camera means that Mac users can benefit from features like Portrait mode and Centre Stage and the new Studio Light feature. Read about how to use your iPhone as a webcam for your Mac. Spotlight Apple’s method to search your Mac – Spotlight – also received a revamp. Quick Look allows you to preview files and you can search photos by location, objects, people, and more. Live Text improvements mean that you will be able to search text within images and videos. Users will even be able to create a new document, start a timer, or more, from within Spotlight. More information here: How Apple has improved Spotlight search in Ventura. Reminders In macOS Ventura, Apple has added several new features to make Reminders more helpful. You’ll be able to see your reminders grouped by time and date, you will be able to pin a list and save lists to be used as templates. Read about the new features in Reminders here: Reminders in macOS Ventura. System Settings System Settings is the new name for System Preferences. A name that iOS users will likely feel at home with. Read about how Apple has revamped System Preferences in macOS Ventura in our in-depth article. Background noises You can play soothing white noise on your Mac in Ventura, including the sound of rain, the ocean, or a stream. Read: How to play soothing white noise in macOS Ventura. Gaming Expect gaming on the Mac to truly take off (well Apple does anyway). Apple says that every new Mac will be able to run AAA games “with ease”. Improvements in Metal 3, MetalFX Upscaling, and Fast Resource Loading API should benefit game developers. AirPods In macOS Monterey and earlier, users didn’t get to control much of the AirPod’s settings, but that’s changing in Ventura. When Ventura launches users will finally get access to the full complement of AirPods settings, just like in iOS. Read more here: Full AirPods settings coming to your Mac. macOS Ventura: App updates Over the years at the same time as Apple has updated the Mac operating system it has also made changes to various apps that ship with the Mac, and we can expect more this year. Several new features are coming to Safari and Mail along with updates to Weather, the Clock, new accessibility tools (such as Live Captions). Mail Mail now has improved search, but probably the most anticipated feature will be the ability to cancel delivery of an email after clicking send (we imagine there is a time limit here) and also schedule sending an email. Both are features offered by third parties, but it’s good to see them coming to Apple’s email software. Read How to unsend and schedule e-mail in Apple Mail for more information. There is some confusion over the inclusion of the Hide My Mail feature, which should mean that it isn’t a requirement to share your email with third parties. Initially, Apple referred to the feature, but this has since been erased from the webpage describing email features in Ventura, at least in some countries. Hide My Email isn’t new to Ventura – it arrived in Monterey in 2021, but in Ventura Apple was expected to extend it to third parties. Messages Like its iOS counterpart, Messages on the Mac will allow users to edit a message once sent and recover accidentally deleted messages. Safari Passkeys will be generated as a more secure means of identifying you and are associated with Touch ID or Face ID. These will replace passwords. 
Apple claims that "Passkeys are unique digital keys that stay on the device and are never stored on a web server," making them more secure because it is impossible to leak one, or for anyone to phish one from you. You may like to read about what to expect from Apple in 2022 as well as the latest information about iOS 16. | Operating Systems |
Android launched 10 years ago. In a decade, its popularity surged so much that there are now more than 2.5 billion active Android devices in the world. Android's modular approach, cost-effectiveness, and customization make it one of the most popular operating systems in the world. These Android devices are used in personal as well as corporate environments, which is why managing the host of Android devices used in enterprise environments has become a key challenge for enterprise IT teams. Managing multiple Android devices is critical since they not only impact the productivity of the workforce but can also cause serious security repercussions.
In modern enterprises, the workforce is spread across multiple geographical locations. This is why it is imperative for enterprises to manage multiple Android devices using remote control apps from a centralized console. Mobile device management (MDM) is hence a solution that can help mitigate these IT challenges while managing a large Android device inventory.
Scalefusion Android MDM speeds up IT processes, including device provisioning and policy enforcement. Furthermore, its web-based console, the Scalefusion dashboard, enables enterprise IT teams to manage multiple Android devices remotely from a PC.
In this article, we shall discuss how enterprise IT teams can manage multiple Android devices remotely from a PC using Scalefusion MDM.
Manage Multiple Android Devices fleet remotely
1. Enrollment in Bulk
For enterprises with large device inventories, one of the most critical and time-consuming elements of digital transformation is provisioning the devices and enrolling them into the MDM. With Scalefusion, bulk enrollment is simplified through several methods that enroll multiple Android devices in one go. IT administrators can create comprehensive device policies, known as 'Device Profiles', and apply them to multiple devices at once.
You can enroll multiple Android devices in Scalefusion MDM in the following ways:
Android Zero-touch
IMEI-based enrollment: upload a CSV of device IMEI numbers to the Scalefusion dashboard and apply policies (a rough sketch of such a file appears after this list).
Serial number-based enrollment: upload a CSV of device serial numbers and apply policies in bulk.
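As a rough illustration, an enrollment upload can be nothing more than a short CSV file. The column names, device names, and IMEI values below are placeholders, not Scalefusion's documented template, so check the dashboard's own sample file before building one.
# Create a hypothetical IMEI upload file; headers and IMEIs are example placeholders.
cat > imei-enrollment.csv <<'EOF'
imei,device_name
356938035643809,Warehouse-Scanner-01
356938035643810,Warehouse-Scanner-02
EOF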
2. Configuring devices in Android kiosk mode to Manage Multiple Android Devices
If the enterprise Android devices have to be locked down into Android Kiosk mode, Scalefusion MDM offers single and multi-app modes. This ensures that only one or more business-specific applications are allowed on the devices. Android kiosk mode enables enterprises to lock devices for business purposes.
With Scalefusion, multiple Android devices can be configured to run into kiosk mode via the Scalefusion dashboard. Using Scalefusion Device management, Android kiosk mode policy configurations can be enforced on multiple enrolled devices from the dashboard.
3. Application distribution and management
The IT teams have to provision multiple Android devices with business apps to ensure a consistent flow of business resources to the employees. Individually distributing applications on the devices is not possible in such cases and this is where Scalefusion application management comes into play. Using Scalefusion Android Device Management Software, IT teams can do the following:
Publish apps from the Google Play store on multiple devices
Remotely uninstall or delete the app
Remotely update the app or configure the app for personalization
Upload APK file of private apps and push it to the device inventory
4. Content management
The enterprise Android devices need to be provisioned with business content. For devices configured in Android kiosk mode or deployed as digital signage in multiple locations, the business IT teams need to update the content including images, videos and presentations remotely. Using Scalefusion, IT teams can push documents, audio files, images, and videos remotely from a PC. IT teams can also create and publish presentations on device groups having multiple devices added to them.
5. Location and Geofencing
Keeping track of a large device inventory is laborious. If Android devices are deployed in various locations, distributed among last-mile delivery executives, or used by frontline workers, the IT teams need to keep a constant check on the location of the device inventory. With Scalefusion, IT teams can track the real-time location of Android devices and also set geofences. With geofencing, IT teams can apply virtual boundaries to physical locations.
6. Inventory overview
When IT teams manage multiple Android devices remotely, along with the location of the device inventory, they also need to keep a close eye on the other device parameters such as device battery, device data usage, storage usage, and also the security incidents that take place on any of the Android devices in the inventory. IT teams cannot keep an individual check on the devices and this is where Scalefusion Deepdive comes to the rescue.
With Scalefusion Deepdive, IT administrators can have deep insights into device inventory in one go.
7. Automated Compliance Checks
IT teams managing a large device inventory are often burdened with multiple recurring tasks that have to be carried out daily, or at least frequently, in order to monitor and maintain the security of the devices and the corporate data. To simplify these tasks, Scalefusion offers Workflows. With Scalefusion Workflows, IT admins can automate compliance alerts and schedule security checks across the entire device inventory. The reports of these checks and alerts are delivered to the IT team's email. These workflows can be created and pushed remotely from a PC using Scalefusion MDM.
One of the critical challenges IT teams face, especially when Android devices are deployed to frontline workers, logistic employees, or last-mile delivery executives is to offer a seamless communication channel. Very often, these employees are out of conventional office areas and/or do not have email ids to communicate, collaborate and engage. Scalefusion extends the NuovoTeam app to facilitate communication between enterprise Android devices. Further, the IT teams can broadcast inventory-wide messages and announcements to multiple Android devices using Eva Channels.
This enables IT teams to manage communication on multiple Android devices remotely from a PC.
Scalefusion MDM extends diverse device management capabilities to manage multiple Android devices with ease. Deploying devices powered by Scalefusion can not only save significant manual IT effort but also improve the productivity of IT teams.
Thousands of businesses rely upon Scalefusion for managing their mobile devices, desktops, laptops and other endpoints.
Renuka Shahane is a Sr. Content Writer at Scalefusion. An engineering graduate, an Apple junkie and an avid reader, she has 5+ years of experience in content creation, content strategy and PR for technology and web-based startups.
| Operating Systems |
Patch Management / Endpoint Security
It's no secret that keeping software up to date is one of the key best practices in cybersecurity. Software vulnerabilities are being discovered almost weekly these days. The longer it takes IT teams to apply updates issued by developers to patch these security flaws, the more time attackers have to exploit the underlying vulnerability. Once threat actors gain access to corporate IT ecosystems, they can steal or encrypt sensitive data, deploy ransomware, damage systems, and more. When there's a known exploit for a critical vulnerability, the need to deploy patches becomes urgent.
At the same time, while IT teams race to keep their operating systems, business applications, and web browsers up to date and fully patched, they have to exercise caution, since applying patches without proper testing can introduce more problems than it solves. The reality is, many organizations are struggling to maintain the upper hand against threats. According to Action1's 2021 Remote IT Management Challenges Report, 78% of organizations admit that they failed to patch critical vulnerabilities in a timely manner during the past year, and 62% said they suffered a breach due to a known vulnerability for which a patch was available but not yet applied.
Fortunately, effective and continuous patch management is within reach. This article explains the challenges and the key best practices for overcoming them.
Why is it challenging for IT teams to manage security updates?
Wide range of software to be kept updated
Organizations often focus on keeping their operating systems patched. While OS patching is certainly critical, third-party software is also subject to vulnerabilities that need to be addressed. Indeed, organizations use a wide range of third-party applications, from databases to web browsers, which need to be constantly patched, too.
Infrastructure specifics
While small offices may have only a few workstations and servers to worry about, large organizations often have a huge number of devices to keep patched. And it's not just the sheer volume that's a problem — each device might have its own hardware configuration and installed software, which adds a great deal of complexity to the patch management process.
Remote work
Moreover, the move to a work-from-anywhere format means IT teams often do not have direct access to the IT assets they need to update, which makes it even harder to keep all corporate machines up to date with the latest security updates. Indeed, on-premises patch management tools have become outdated in the modern era of remote and hybrid work.
Lack of effective processes and tools
The Action1 study found that 38% of IT teams cannot get information about all updates in one place and prioritize them effectively, and nearly as many (37%) say they have to use too many non-integrated tools to track and deploy updates. Moreover, some patch management solutions are often so complicated that they require a dedicated specialist to use.
What are the best practices for managing security updates?
To establish an effective patch management process, consider the following best practices:
Inventory your systems and software — You can't fix what you don't know about. Scan your environment for assets and installed software, and keep track of your inventory (a minimal command-line sketch of this step follows this list).
Group your assets — Categorize your endpoints based on their installed operating systems, services, and applications, so you can deploy patches to all affected assets simultaneously.
Prioritize risks — Determine which of your assets are most critical from both a security perspective and a business continuity standpoint. Key questions to answer include:
Which assets must be patched immediately?
What types of patches need to be deployed promptly?
What constraints are there around patching certain assets during working hours? If an update must be applied right away, how can you mitigate the impact on the business?
Test patches before deploying them organization-wide — Before rolling out a patch across all your endpoints, test it on a small control group.
Apply patches promptly — Don't give attackers extra time to exploit known vulnerabilities. In particular, patches that address critical flaws should be applied without undue delay.
Stay on top of known vulnerabilities — If you're not aware of a threat, you can't mitigate it. While it can be quite cumbersome to review all CVEs, at least get into the habit of regularly reviewing security news as well as vulnerability digests from patch management software vendors.
Automate — No one can manage the modern flood of patches effectively using a manual approach. Solid patch management software will save you time and ensure far more reliable results by streamlining the patching process.
Cover all your devices — Be sure that your patch management process and solution cover not just in-office machines but all your remote endpoints as well.
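As a minimal sketch of the inventory step on a single Debian or Ubuntu machine, the commands below list what is installed and which updates are pending. Package managers differ on other distributions and on Windows, so treat this as an illustration rather than a complete inventory process.
# List every installed package with its version (Debian/Ubuntu).
dpkg-query -W -f='${Package} ${Version}\n' > installed-packages.txt
# Refresh package metadata, then list packages with pending updates.
sudo apt-get update -qq
apt list --upgradable 2>/dev/null | grep -v '^Listing' > pending-updates.txt
# Quick count of how many updates are waiting.
wc -l pending-updates.txt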
What solutions can help?
The Action1 cloud-native patch management platform supports all of the best practices detailed above. It relieves IT teams from the strain of manual patching by providing them with intelligent automation capabilities and empowering them to enforce continuous patch compliance for their remote and in-office machines. Even better, you can cover your first 100 endpoints completely free. | Operating Systems |
Intel has published a new whitepaper (PDF) that envisions simplifying its processor instruction set architecture (ISA). The main thrust of the proposed move would be to pare back the extensive legacy support and go 64-bit only with a new and simplified 'Intel x86S' architecture. Several technical benefits are outlined in an Intel developer blog post. In summary, the legacy-reduced x86S architecture removes outdated execution modes to benefit upcoming hardware, firmware, and software implementations.
Many contemporary PC users who enjoy using the latest Windows applications and games will have moved to 64-bit Windows during the Windows 7 era. This coincides with the time when system RAM amounts above 4 GB became commonplace (a 32-bit OS can address only 4 GB of RAM, of which roughly 3.2 GB is typically usable), and 64-bit applications and games started to become mainstream. So, with the current Windows 11 OS being 64-bit only and apps and games sucking up gigabytes of RAM, it seems reasonable for Intel to want to consign architectural considerations spanning all the way back to the original 8086 chip to history.
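If you are curious whether your own system already runs in the 64-bit mode that x86S assumes, a quick check on Linux is sketched below; the exact output format varies by distribution, and on Windows the same information lives under System > About.
# Architecture the running kernel reports (x86_64 means 64-bit).
uname -m
# Operating modes the CPU supports, via util-linux's lscpu.
lscpu | grep -i 'op-mode'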
"Intel 64 architecture designs come out of reset in the same state as the original 8086 and require a series of code transitions to enter 64-bit mode," Intel explains with regard to its legacy support. "Once running, these modes are not used in modern applications or operating systems."
So, it is easy to understand that there will be benefits from this architectural pruning, and the removal of the complex booting process outlined above would be the first benefit observed by users of new Intel x86S chips. What are the other benefits to users and developers? Intel provides the following bullet points:
- Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use.
- Removing ring 1 and 2 (which are unused by modern software) and obsolete segmentation features like gates.
- Removing 16-bit addressing support.
- Eliminating support for ring 3 I/O port accesses.
- Eliminating string port I/O, which supported an obsolete CPU-driven I/O model.
- Limiting local interrupt controller (APIC) use to X2APIC and remove legacy 8259 support.
- Removing some unused operating system mode bits.
For those interested in running older OSes and software on the latest Intel hardware, Intel suggests that there are mature virtualization-based software solutions and that users can employ virtualization hardware (VMX) "to deliver a solution to emulate features required to boot legacy operating systems." Ardent retro computing fans will also collect and use old PC systems for running their ancient software libraries. Earlier this week, we noted that there are new Intel 386 and Intel 8088 portables being developed and sold online.
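Intel doesn't name specific tools, but as one example of the virtualization route, a legacy 16- or 32-bit operating system image can be booted in an emulator such as QEMU; the disk image name below is a placeholder for whatever retro OS you have on hand.
# Boot a legacy OS image in QEMU, emulating an i386-class machine with 256 MB of RAM.
qemu-system-i386 -m 256 -hda legacy-os.img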
Those who think that they will be impacted by the proposed Intel 64-bit only x86S architecture transition should take a closer look at the linked whitepaper, which Intel appears to have published to gauge user/developer reaction and potentially gather feedback. | Operating Systems |
macOS Sonoma is on track to be the first macOS update to launch alongside Apple's other operating system updates in years. In other words, when Apple announces the release date of iOS 17, iPadOS 17, tvOS 17, and watchOS 10, it's very likely that macOS Sonoma will launch alongside them. Here's why.
If you look at previous macOS updates, they were tied to new iPadOS features, and they were all delayed because those features needed more testing. For example, in 2022, Stage Manager was the headline feature that arrived with macOS Ventura and iPadOS 16.
During beta testing, Apple removed the ability for users to test Stage Manager as it offered a poor experience. Then, Apple paused iPadOS 16 beta testing, and a few weeks later, it finally released an iPadOS 16.1 beta version while it was still seeding a beta version of iOS 16.0.
With that, iOS 16 was released in September 2022 alongside tvOS 16 and watchOS 9, but iPadOS 16 and macOS Ventura weren't released until October.
A year before that, Apple tested Universal Control, a bold feature that seamlessly integrated iPad and Mac workflows. A similar issue occurred, and while Apple released iPadOS 15 alongside iOS 15, macOS Monterey wasn't released until the end of October. Universal Control, for its part, only became available later, in 2022, with iPadOS 15.4 and macOS 12.3.
As a final example, macOS Big Sur brought a design revamp to macOS in the year Apple released its first Macs with its own silicon. While the other software updates were available by September, macOS Big Sur wasn't released until November 2020.
With all that in mind, here’s why macOS Sonoma being released alongside iOS 17 is bad news.
What new features?
I believe macOS Sonoma will be released alongside iOS 17 because it doesn't offer groundbreaking features. During the WWDC 2023 keynote, Apple spent most of macOS Sonoma's stage time talking about new wallpapers and the ability to add widgets to the desktop.
The only integration with iPadOS 17 is Stage Manager’s improvements. Still, it’s the first time in two years that Apple isn’t developing a new technology to tighten integration between these two operating systems.
In addition, while Apple is still polishing iOS 17, macOS Sonoma has felt almost the same since beta testing started a month ago. That said, if the Cupertino firm isn't planning any groundbreaking changes in the next beta or so, the likelihood of all the operating systems launching together is high.
BGR will let you know once Apple announces the release date of iOS 17, macOS Sonoma, and so on. | Operating Systems |
On Monday, a researcher with Google Information Security posted about a new vulnerability he independently found in AMD's Zen 2 processors. Tom's Hardware reports: The 'Zenbleed' vulnerability spans the entire Zen 2 product stack, including AMD's EPYC data center processors and the Ryzen 3000/4000/5000 CPUs, allowing the theft of protected information from the CPU, such as encryption keys and user logins. The attack does not require physical access to the computer or server and can even be executed via JavaScript on a webpage...
AMD added the AMD-SB-7008 Bulletin several hours later. AMD has patches ready for its EPYC 7002 'Rome' processors now, but it will not patch its consumer Zen 2 Ryzen 3000, 4000, and some 5000-series chips until November and December of this year... AMD hasn't given specific details of any performance impacts but did issue the following statement to Tom's Hardware: "Any performance impact will vary depending on workload and system configuration. AMD is not aware of any known exploit of the described vulnerability outside the research environment..."
AMD describes the exploit much more simply, saying, "Under specific microarchitectural circumstances, a register in "Zen 2" CPUs may not be written to 0 correctly. This may cause data from another process and/or thread to be stored in the YMM register, which may allow an attacker to potentially access sensitive information."
The article includes a list of the impacted processors with a schedule for the release of the updated firmware to OEMs.
The Google Information Security researcher who discovered the bug is sharing research on different CPU behaviors, and says the bug can be patched through software on multiple operating systems (e.g., "you can set the chicken bit DE_CFG[9]") — but this might result in a performance penalty.
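For readers curious what "setting the chicken bit" looks like in practice, the widely circulated Linux workaround uses the msr-tools utilities to flip bit 9 of AMD's DE_CFG register (MSR 0xC0011029). The register number and bit come from public write-ups of the bug, so verify them against AMD's advisory before applying anything, and note that the mitigation may cost performance.
# Load the kernel module that exposes model-specific registers.
sudo modprobe msr
# Read DE_CFG, set bit 9 (the Zenbleed chicken bit), and write it back on every core.
sudo wrmsr -a 0xc0011029 $(($(sudo rdmsr -c 0xc0011029) | (1<<9)))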
Thanks to long-time Slashdot reader waspleg for sharing the news. | Operating Systems |
Less than 24 hours after issuing an urgent fix for a zero-day security vulnerability under active exploitation in the wild, Apple's patch rollout is being reported to break certain websites in Safari.
The bug is found in Apple's WebKit browser engine (CVE-2023-37450) and allows arbitrary code execution on fully patched iPhones, Macs, and iPads. It can be exploited in drive-by attacks by luring targets to boobytrapped webpages.
"Apple is aware of a report that this issue may have been actively exploited," the company said in its Rapid Security Response (RSR) advisories on Monday.
The RSRs offered updates to all three operating systems and the browser itself:
- iOS and iPadOS 16.5.1 (a)
- macOS 13.4.1 (a); a quick check that the supplement is installed is sketched after this list
- Safari 16.5.2
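On a Mac, one quick way to confirm from Terminal whether the supplemental "(a)" release is actually installed is sketched below. On recent versions of macOS the Rapid Security Response suffix appears as an extra field in sw_vers output, though the exact field name may vary by release, so treat this as an approximation.
# Base macOS version, e.g. 13.4.1.
sw_vers -productVersion
# Full output; an installed Rapid Security Response typically shows up
# as an additional ProductVersionExtra line such as "(a)".
sw_vers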
Users should patch quickly, experts noted, if they can. "These exploits are usually executed silently," says Jamie Brummell, Socura co-founder and CTO. "They are effectively invisible, and the chances are that victims would never know they were targeted. Detailed forensic analysis would be needed to determine whether a device had been targeted after the fact."
However, in a surprise twist, users began reporting browser malfunctions in the wake of the patches' installation. According to postings in the official macOS Support Community and in the MacRumors user forum, some sites and web apps, including Facebook, Instagram, WhatsApp, and Zoom, started throwing "Unsupported Browser" errors in Safari after the updates were installed.
Users zeroed in on the extra "(a)" in the version number as the culprit, flagging that the unusual nomenclature gets in the way of those sites' user-agent detection.
Did Apple Withdraw the Patches?
MacRumors reported that the computing giant yanked the updates after the complaints, and some users noted that the latest patches no longer appear available for installation on any of the platforms (including on this author's iPhone, which shows iOS 16.5.1 as the latest available version despite having automatic updates enabled).
However, Apple has been mum on those reports, and it did not immediately respond to a request for comment from Dark Reading on the status of the patch process. Meanwhile, the new patches are still listed on the company's security advisory and RSR page.
"This patch was rapid in name, and rapid in nature," Brummell says. "Reports suggest it has been pulled by Apple because it was causing some websites to break. This is the challenge with rapidly developed patches. They can result in unexpected issues due to the limited time the vendor has to test them."
Rapid Security Response: Too Much, Too Soon?
This is only the second time Apple has deployed its RSR emergency update protocol, which was rolled out earlier this year in an effort to be more agile in security patching. The idea is to push out single-issue fixes as they're needed, rather than use more traditional periodic updates that contain a glut of fixes and feature updates all at once.
The first RSR also had problems and didn't install properly on iPhones, so it's clear that Apple is still working out the kinks in the scheme, Brummell notes.
While the patch confusion around the zero-day clears up, exploits are likely continuing. Even so, worried iPhone users do have some recourse against this and other Apple zero-days.
“One of the only effective things iPhone users can do to defend against these zero-days is to reboot daily," Brummell says. "Gaining persistence on iPhone is extremely hard, so restarting usually kills the threat actor's code, at least until the device gets exploited again."
He also points out that Apple Lockdown Mode for all platforms can stop some of these exploits from working, "by blocking Web-based scripts, risky message attachment types, and more.” | Operating Systems |
CNN Business — Apple is directing users of most of its devices to update their software after the company discovered a vulnerability in its operating systems that it says "may have been actively exploited." In security updates posted online on Wednesday and Thursday, Apple said the vulnerability affects iPhones dating back to the 6S model, iPad 5th generation and later, iPad Air 2 and later, iPad mini 4 and later, all iPad Pro models and the 7th generation iPod touch. Apple (AAPL) said the vulnerabilities give hackers the ability to take control of a device's operating system to "execute arbitrary code" and potentially infiltrate devices through "maliciously crafted web content." The vulnerability also extends to Mac computers running the company's Monterey OS as well as Apple's Safari browser on its Big Sur and Catalina operating systems, the company said in a subsequent update. Cybersecurity experts urged Apple users to update their devices, with the US government's Cybersecurity and Infrastructure Security Agency warning that "an attacker could exploit one of these vulnerabilities to take control of an affected device." The agency said affected users should "apply the necessary updates as soon as possible." | Operating Systems |
After a month of beta testing, Apple has now released watchOS 10 public beta 1. Anyone enrolled in the Apple Beta Software Program can try the new features of this upcoming software, but be aware: unlike other beta operating systems, you can't downgrade from watchOS 10 back to watchOS 9. So once you install this update, you can't go back, and you'll have to wait until the official release later this fall.
That being said, the watchOS 10 public beta brings a major update to the Apple Watch, as this marks the tenth update cycle for this operating system. Apple says all Apple Watch apps use the entire display to create new places for content, so you can see and do more, which is especially useful for larger displays. In addition, Apple added two new watch faces, Palette and Snoopy.
With Smart Stack, you get the information you need below any Watch Face. You just need to turn the Digital Crown to reveal widgets in the Smart Stack. It includes multiple timers, your next meeting, music playing, and more. Lastly, the Control Center is now available when pressing the side button.
watchOS 10 beta also revamps Cycling and Hiking workouts. For mental health awareness, watchOS 10 beta lets you log your state of mind by scrolling through engaging visuals to help you select how you're feeling at that moment and during the day overall. Apple wants to help you stay consistent with this logging through notifications and complications on watch faces. In addition, your Apple Watch Series 6 (and later), SE 2, or Ultra can identify how much time you spend in daylight thanks to its ambient light sensor.
Besides watchOS 10 public beta 1, Apple is also seeding the first public test versions of iOS 17, iPadOS 17, macOS Sonoma, tvOS 17, and HomePod Software Version 17.
BGR’s coverage of watchOS 10 so far
- Apple execs explain why watchOS 10 still lacks third-party faces
- Apple Watch’s new Activity Rings in watchOS 10 could’ve been so much better
- watchOS 10 makes the Apple Watch less dependent on the iPhone with this Wallet feature
- watchOS 10 will bring two new faces for Apple Watch users
- watchOS 10 revamps Cycling and Hiking workouts with these features | Operating Systems |
The perfect hybrid machine that’s just as good a tablet as it is a laptop still doesn’t exist. But throughout last year, companies like Microsoft, Apple and Google continued to improve their operating systems for machines that do double duty. Windows 11 has features that make it friendlier for multi-screen devices, while Android has been better optimized for larger displays. Plus, with the rise of ARM-based chips for laptops, especially Apple’s impressive M series, prospects for a powerful 2-in-1 with a vast touch-friendly app ecosystem is at an all-time high.
Even the best 2-in-1 laptops still have their limits, of course. Since they’re smaller than proper laptops, they tend to have less-powerful processors. Keyboards are often less sturdy, with condensed layouts and shallower travel. Plus, they’re almost always tablets first, leaving you to buy a keyboard case separately. (And those ain’t cheap!) So, you can’t always assume the advertised price is what you’ll actually spend on the 2-in-1 you want.
Sometimes, getting a third-party keyboard might be just as good, and they’re often cheaper than first-party offerings. If you’re looking to save some money, Logitech’s Slim Folio is an affordable option, and if you don’t need your keyboard to attach to your tablet, Logitech’s K780 Multi-Device wireless keyboard is also a good pick.
While we’ve typically made sure to include a budget 2-in-1 laptop in previous years, this time there isn’t a great choice. We would usually pick a Surface Go, but the latest model is still too expensive. Other alternatives, like cheaper Android tablets, are underpowered and don’t offer a great multitasking interface. If you want something around $500 that’s thin, lightweight and long-lasting, you’re better off this year looking at a conventional laptop (like those on our best budget PCs list).
When you’re shopping for a 2-in-1, there are some basic criteria to keep in mind. First, look at the spec sheet to see how heavy the tablet is (alone, and with the keyboard). Most modern hybrids weigh less than 2 pounds, with the 1.94-pound Surface Pro 9 being one of the heaviest around. The iPad Pro 12.9 (2022) and Samsung’s Galaxy Tab S8+ are both slightly lighter. If the overall weight of the tablet and its keyboard come close to 3 pounds, you’ll be better off just getting an ultraportable laptop.
You’ll also want to opt for an 11-inch or 12-inch screen instead of a smaller 10-inch model. The bigger displays will make multitasking easier, plus their companion keyboards will be much better spaced. Also, try to get 6GB of RAM if you can for better performance — you’ll find this in the base model of the Galaxy Tab S7+, while this year’s iPad Pro and the Surface Pro 8 start with 8GB of RAM.
Finally, while some convertible laptops offer built-in LTE or 5G connectivity, not everyone will want to pay the premium for it. An integrated cellular radio makes checking emails or replying to messages on the go far more convenient. But it also often costs more, and that’s not counting what you’ll pay for data. And, as for 5G — you can hold off on it unless you live within range of a mmWave beacon. Coverage is still spotty and existing nationwide networks use the slower sub-6 technology that’s barely faster than LTE.
Best overall: Surface Pro 9 (Intel)
There’s no beating the Surface series when it comes to 2-in-1s. They’re powerful, sleek tablets running an OS that’s actually designed for productivity. The Surface Pro 9 is Microsoft’s latest and great tablet, and it builds upon the already excellent Pro 8. It features speedy 12th-gen Intel CPUs and all of the major upgrades from last year, including a 120Hz display and a more modern design. It’s the best implementation of Microsoft’s tablet PC vision yet.
Don’t confuse this with the similarly named Surface Pro 9 with 5G, though, which has a slower ARM processor and inferior software compatibility. Built-in cellular is nice and all, but the Intel Pro 9 is a far better PC.
Like most of the other convertible laptops on this list, the Pro 9 doesn’t come with a keyboard cover — you’ll have to pay extra for that. That’s a shame, considering it starts at $1,000. Microsoft offers a variety of Type Covers for its Surface Pros ranging from $100 to $180, depending on whether you want a slot for a stylus. But at least they’re comfortable and well-spaced. You can also get the Surface Slim Pen 2 ($130) for sketching out your diagrams or artwork, which features haptic feedback for a more responsive experience.
Best for Apple users: 12.9-inch iPad Pro
If you’re already in the Apple ecosystem, the best option for you is obviously an iPad. The 12-inch Pro is our pick. Like older models, this iPad Pro has a stunning 12.9-inch screen with a speedy 120Hz refresh rate, as well as mini-LED backlighting. This year, it includes Apple’s incredibly fast M2 chip and more battery life than ever before.
Apple’s Magic Keyboard provides a satisfying typing experience, and its trackpad means you won’t have to reach for the screen to launch apps. But it’ll also cost you an extra $300, making it the most expensive case on this list by a lot. The iPad also lacks a headphone jack and its webcam is awkwardly positioned along the left bezel when you prop it up horizontally, so be aware that it’s still far from a perfect laptop replacement. Still, with its sleek design and respectable battery life, the iPad Pro 12.9 is a good 2-in-1 for Apple users.
Best for Android users: Samsung Galaxy Tab S8+
While Windows is better than iPadOS and Android for productivity, it lags the other two when it comes to apps specifically designed for touchscreens. If you want a tablet that has all the apps you want, and only need it to occasionally double as a laptop, the Galaxy Tab S8+ is a solid option. You’ll enjoy watching movies and playing games on its gorgeous 12.4-inch 120Hz AMOLED screen, and Samsung includes the S Pen, which is great for sketching and taking notes. The Snapdragon 8 Gen 1 chip and 8GB of RAM keep things running smoothly, too.
Last year, Samsung dramatically improved its keyboard case, making the Tab an even better convertible laptop. You could type for hours on this thing and not hate yourself (or Samsung). The battery life is also excellent, so you won’t need to worry about staying close to an outlet. The main caveat is that Android isn’t great as a desktop OS, even with the benefits of Android 12L. And while Samsung’s DeX mode offers a somewhat workable solution, it has plenty of quirks.
Cherlynn Low contributed to this report. | Operating Systems |
On Tuesday, Mozilla, creator of the web browser Firefox, announced that Macs running macOS Mojave and earlier won't get major Firefox updates past version 115 — but security updates will keep coming for a year.
The release notes state that users running macOS 10.12, 10.13, and 10.14 will be migrated to the ESR 115 version of Firefox so that they continue to receive important security updates.
According to the Mozilla Wiki, Firefox 116 is set to release in August 2023. It will require users to be running macOS Catalina 10.15 or newer.
A support post notes that Mozilla follows Apple's ongoing practice of offering support for the three most recent releases of macOS. As the post points out, macOS Mojave 10.14 received its last security update in July 2021.
Mozilla goes on to explain that maintaining its web browser for obsolete operating systems can become costly for Mozilla and downright dangerous for users. Firefox ESR 115 will continue to receive important security updates until September 2024. | Operating Systems |
Have you ever accidentally deleted an important conversation on your iPhone? Or maybe it wasn’t an accident, and you deleted a conversation thread that has lasted for years out of a fit of rage? Maybe you’re like me, with fingers that have a mind all of their own, and they love to hit that delete button at the wrong time. Don't worry. It happens to the best of us.
Whether it was sentimental, funny or proof of your innocence, that precious text message you thought was gone forever might just be a few swipes away.
Before you get too excited and do your happy dance, let’s make sure your iPhone is operating on iOS 16 or later – as older operating systems do not offer this text message retrieval tool.
Check your operating system
First, open your Settings app
Second, click General
Third, select About
Under iOS Version, you are able to see what operating system your iPhone is running. Note, there is a new update of iOS Version 16.4 that you can learn all about here.
Now that we know our iPhones are operating on iOS 16 or later, we can proceed with retrieving those cherished text messages.
How to retrieve deleted messages
First, open up your Messages app and click on Edit in the top left corner. (If you have message filtering on, you will choose "Filters" in the top left instead.)
Next, select Show Recently Deleted at the bottom of the list. This will show you all of your deleted conversations from the past 30 days, with the most recently deleted messages at the top of the list. Please note, after 30 days the messages are lost forever.
To retrieve a deleted conversation, simply click on it and select the Recover button located in the bottom right corner of the screen.
A pop-up will then appear, asking you to confirm the retrieval. Simply click Recover [number of] Messages to confirm the retrieval.
The conversation will now be sent back to All Messages and will be restored to its original location.
Final Thoughts
So, my fellow texters, fear not the delete button. With this nifty feature from Apple, you can retrieve your messages and keep the conversation going. Now, let's get back to texting and making those fingers work their magic!
Copyright 2023 CyberGuy.com. All rights reserved. | Operating Systems |
Apple is making it easy for everyone with an Apple ID to test developer beta versions of its different operating systems — iOS, iPadOS, macOS, watchOS, and tvOS.
Previously, developer beta releases were available to people paying a $99-a-year fee for Apple’s developer program — paying for the developer program also allows you to distribute your apps on the App Store. Now, as spotted by multiple people, Apple has updated its support page to indicate that anyone with an Apple ID can download the latest developer beta. This clarification came after multiple publications reported that Apple accidentally made the developer beta available to all users.
You can download and try the developer betas for iOS 17, iPadOS 17, or macOS Sonoma to check out new features. However, these are very early versions of the operating systems and it is not advisable to install them on your primary device as they are often buggy and can disrupt your daily work.
If you want to test some of these updates but don’t want to risk using unstable versions or potentially losing data, Apple will release public beta versions next month. Those versions are supposed to be more stable than the developer beta releases.
Apple recently started changing how it offers beta versions: having an Apple ID linked to a developer profile became mandatory to install them. This move was designed to stop people from installing unauthorized profiles without registering for the developer program. But Apple is reversing its stance with today's change, as it has never been easier to install the developer beta versions of its next major operating system releases.
It appears that passkeys are now supported for Apple IDs, but only if you have the first beta for iOS 17 (or iPadOS 17 or macOS Sonoma). Beta users of Apple’s operating systems then have the ability to sign in anywhere that supports signing in with your Apple ID — covering not only Apple.com and icloud.com, but also anywhere else your Apple account is linked to — sans passwords, using just the biometrics on their iPhone or MacBook and a dream.
It works anywhere that supports signing in with your Apple ID. For example, if I want to sign in to Reddit and look at my favorite John Oliver-themed subreddits, I can tap the “Continue with Apple” button on the sign-in screen, and I’ll be given the option to sign in by scanning a QR code with my iPhone.
The feature appears to work on any Apple device — I was able to use a passkey successfully in Chrome, Safari, and Arc. When trying on a Windows PC in Chrome, the same “Sign in with iPhone” button showed up, but the request timed out when I scanned the QR code. On an Android phone, I didn’t see the button at all.
Once you’ve created the passkey on your iPhone, iPad, or Mac, it syncs across all of your Apple devices, letting any of them use the biometric logins set up on that system to sign in with your Apple ID. Since passkeys aren’t exclusively the domain of Apple, once it’s fully launched, you should be able to generate them on non-Apple devices for passwordless sign-in with your Apple ID, too, using Android or Windows using either the Chrome or Edge browser, which each support passkeys. | Operating Systems |
What products and services will benefit the most from recent developments in AI technology? We're keeping close tabs on the question, curious if startups will be able to best leverage AI improvements or if cloud platforms are the best positioned. Or maybe it'll be the companies building generative AI models themselves?
I wonder if we’re missing a key type of software in the discussion: the operating system.
The Exchange explores startups, markets and money.
Last week, Microsoft announced a slew of new products and features, including Microsoft Copilot for Windows 11. Calling it an "everyday AI companion" that will live in Windows and other Microsoft products, the company says the product is an extension of its previous efforts to bring Bing's early AI tools to Windows. It's also the first update to an operating system that I have been excited about in ages.
Ask yourself how much your computing experience has really changed from Windows 10 to Windows 11, or from the last version of macOS you used to whatever you are on now. (It turns out that I am running Ventura 13.5.2 at the moment. Who knew?!) The same question works for iOS and other operating systems I touch day to day. I bet that your answer is similar to my own: not much. Unless you’re using some variant of Linux, of course.
That’s because operating systems have become docile computing layers that mostly serve as a base to run other applications off. I don’t care much about what OS I use, because I use Chrome across all of them. I don’t really need to know much about the operating system that is running Chrome for me at that particular moment, I just need to use webapps, thank you very much.
This contented stagnation is not a bad thing, mind. The fact that iOS still shows you an app grid when you fire it up is because it’s a simple and intuitive way to show a user their arsenal of applications. Microsoft is similarly invested in the Start Menu, which has worked pretty well since I was trying to keep my parents off the phone so that I could stay online as a child. And the various flavors of Android have their own takes on the app drawer. | Operating Systems |
Everyone depends on OpenSSL. You may not know it, but OpenSSL is what makes it possible to use secure Transport Layer Security (TLS) on Linux, Unix, Windows, and many other operating systems. It's also what is used to lock down pretty much every secure communications and networking application and device out there. So we should all be concerned that Mark Cox, a Red Hat Distinguished Software Engineer and the Apache Software Foundation (ASF)'s VP of Security, this week tweeted, "OpenSSL 3.0.7 update to fix Critical CVE out next Tuesday 1300-1700UTC."
How bad is "Critical"? According to OpenSSL, an issue of critical severity affects common configurations and is also likely exploitable. It's likely to be abused to disclose server memory contents and potentially reveal user details, and could be easily exploited remotely to compromise server private keys or execute code remotely. In other words, pretty much everything you don't want happening on your production systems. Eep!
The last time OpenSSL had a kick in its security teeth like this one was in 2016. That vulnerability could be used to crash and take over systems. Even years after it arrived, security company Check Point estimated it affected over 42% of organizations. This one could be worse. We can only hope it's not as bad as that all-time champion of OpenSSL security holes, 2014's Heartbleed.
So why announce the security hole before the patch is in? Cox explained, "That's our policy ... to provide folks with a date they know to be ready to parse an advisory and see if the issue affects them." But couldn't a hacker find it and exploit it as a zero-day? He doesn't think so. "Given the number of changes in 3.0 and the lack of any other context information, such scouring is very highly unlikely."
There is another little silver lining in this dark cloud. This new hole only affects OpenSSL versions 3.0.0 through 3.0.6. So, older operating systems and devices are likely to avoid these problems. For example, Red Hat Enterprise Linux (RHEL) 8.x and earlier and Ubuntu 20.04 won't be smacked by it. RHEL 9.x and Ubuntu 22.04, however, are a different story. They do use OpenSSL 3.x.
If you're a Linux user, you can check your own system by running the shell command:
# openssl version
In my case, the laptop in front of me is running Debian Bullseye, which uses OpenSSL 1.1, so this machine is good.
But if you're using anything with OpenSSL 3.x in it -- anything -- get ready to patch on Tuesday. This is likely to be a bad security hole, and exploits will soon follow. You'll want to make your systems safe as soon as possible. | Operating Systems |
Apple released iOS 17.1 on Wednesday to all users with features including improved AirDrop sharing and updates to Apple Music. The biggest change with this release is the ability to use AirDrop when you move out of the Wi-Fi range. AirDrop uses Bluetooth to securely create a peer-to-peer Wi-Fi network between two Apple devices when you are sharing content.
With this update, you can start transferring a file to a friend while you are in the same room. But if you have to leave the spot, the transfer can continue over cellular data.
This is useful when you have to send a bunch of photos and videos to a friend, but you have to leave. Until now, you could only transfer a few photos to them and then you had to use iMessage, WhatsApp, email, or any other method to send other photos.
You can turn this option off if you want to save mobile data by heading to Settings > General > AirDrop > Out of range.
With iOS 17.1 users can now add songs, albums, playlists, and artists to their Apple Music library just by tapping on the favorite button. Apple Music will use favorites to tweak music suggestions. You can easily add a track to your library by tapping on the star icon on the “Now Playing” widget. This gets rid of the confusing “love” icon, which was previously used as a signal to suggest more songs like that.
Other iOS 17.1 enhancements include the ability to share your contact details using NameDrop from an iPhone to an Apple Watch running the latest versions of the operating systems on these devices. This feature was previously limited to iPhones. There’s also a new Dynamic Island icon to indicate to a user if the flashlight is active.
Users can update to iOS 17.1 by heading to Settings > General > Software Update and following instructions.
Apple also rolled out watchOS 10.1 on Wednesday with the Double Tap feature that enables gesture-based interactions with different apps.
Apple announced the WWDC 2023 dates last week, so we already know when iOS 17, watchOS 10, and macOS 14 announcements will drop. But this year is different for Apple. The June 5th keynote will also feature Apple’s first-gen mixed reality headset and its operating system. That’s Apple’s priority right now, despite the rumored release delays.
A new report from Bloomberg’s Mark Gurman says that the AR/VR headset will be unveiled at WWDC. But Gurman also addressed Apple’s other operating systems. According to him, watchOS 10 should deliver an extensive upgrade featuring “notable changes” to the user interface. That already sounds more exciting than the iOS 17 upgrade, which should focus on performance improvements rather than new features.
Gurman said in his Power On newsletter that the June 5th WWDC 2023 keynote will be "one of the most important days in its history":
The headset will be a risky, but potentially monumental launch for Apple. It will herald mixed reality as its next major product category, offering a glimpse of a future where people are interacting with the world via headsets and not pocketable touch screens.
Recently, Ming-Chi Kuo detailed delays for the new product, prompting speculation that Apple might delay the announcement event. However, Gurman's new report seems to leave no room for interpretation. The AR/VR headset will be Apple's main priority at WWDC in early June.
Previous reports said that Apple devoted massive resources to the mixed reality headset, which would explain the less exciting upgrades for operating systems like iOS 17. iOS will still be the star of the show, but the OS update will not deliver any major iPhone features this year.
watchOS 10 will be more exciting, according to Gurman. The Apple Watch OS will deliver big changes to the user interface, although the Apple insider did not detail them:
I believe the new watchOS should be a fairly extensive upgrade — with notable changes to the user interface — unlike iOS 17. It’s important for watchOS to have a big year given that the Apple Watch hardware updates will be anything but major.
The watchOS 10 user interface overhaul should compensate for a lackluster hardware refresh this year. Indeed, Apple isn’t expected to deliver an Apple Watch Ultra 2 upgrade. The Apple Watch Series 9 could be a minor upgrade over last year’s model.
Meanwhile, the iPhone 15 Pro upgrade is going to be significant, especially the 15 Pro Max model. This would explain the more muted iOS 17 release.
If this information is accurate, and Gurman’s reports usually are, it further underscores the importance of Apple’s AR/VR headset. Other reports said Apple had prioritized the new product, anticipating less interesting software developments for Apple’s other products this year. | Operating Systems |
Windows 8 was known for its radical change to the OS app screen. Since then, Microsoft has reincorporated default features like the Start button in Windows 10 and 11.
Poor, beleaguered Windows 7, 8, and 8.1 are now officially strolling the Elysian Fields of dead operating systems, as the end of Microsoft's support for them finally arrived on Tuesday.
Though the OS will continue to function, the end of support means 8.1 will no longer receive any software or security updates. It also means the company won't provide any technical support for either operating system. Windows 8 came out back in 2012, but the company released its 8.1 patch just a year later to address criticisms like a lack of a Start button and a lack of customization. That OS was finally succeeded by Windows 10 just three years later.
But more than that, Jan. 10 is the day that Microsoft is truly ending any and all support for 8's much more popular older brother, Windows 7.
Three years ago almost to the day, Microsoft officially announced it would stop doing software updates for Windows 7, an OS that originally came out in 2009. Still, the company did offer a hand with its Extended Security Updates (ESU) service for people running Windows 7 Professional and Windows 7 Enterprise. However, those extended security patches didn't include Microsoft Support, and users had to pay to sign up for the service. Windows 8.1 won't be seeing any extended security support, likely because 8 proved to be a much less popular version of the OS than 7.
Of course, both operating systems will still run on computers, but a lack of software patches does open up plenty of vulnerabilities. The last patch Windows 7 received for ESU customers was back in December.
At the end of 2020, ZDNet reported there were still millions of computers running Windows 7 even though Microsoft had ended full support. Official analytics from U.S. government agencies previously suggested that 8.5% of federal computer systems were still running Windows 7 three years ago. The latest numbers from Sunday show there are still over 33,000 federal computer systems running 7. Similarly, there are over 18,000 federal systems running Windows 8 or 8.1.
In the meantime, adoption of Windows 11 has been pretty slow for Microsoft's tastes. Analytics firm Statcounter reported at the tail end of last year that the latest OS accounts for just under 17% of Windows market share. It barely beat out Windows 7, which claimed 11%. Last year, the company added a host of new features to the OS, including new accessibility and security features, but that still hasn't made enough of a case for the 68% of Windows 10 users to finally make the switch. | Operating Systems |
It’s free. It’s safer. And no more annoying Windows updates! The Indian government is looking to replace the Windows operating system on computers used in the Defence Ministry with a new indigenous operating system called Maya OS. Having a new operating system (OS), that too for the government is a challenge but the Ministry claims that Maya OS is being introduced to thwart cyber threats and also reduce the dependence on OS made by global tech companies, like Microsoft. Not to forget, Maya OS is free, so, the government will no longer have to spend on getting genuine Windows licences or pay for expensive Microsoft Word and other services.
The idea is pretty simple: cybercriminals create malware and viruses for any platform that is popular, since a platform's popularity helps malware spread quickly. With Windows being used by the majority of PC users across the world, it is the primary target for hackers. Another aspect to consider is that when data is exchanged via emails, USB pen drives, portable hard drives, routers or any other end device or platform, hackers generally transmit a Windows payload because the chances of infection increase due to Windows' popularity, which makes the PCs of government agencies immensely vulnerable.
Now, when a Windows malware gets transmitted to a PC running Maya OS, the malware will essentially have no effect at all. Emails with malware encoded PDF, JPEG, excel sheet and docs are usually seen as a primary entry point for hackers. So, if a government officer downloads an important PDF file from an email, without knowing that the PDF has embedded Windows malware, on a PC with Maya OS, then malware can’t do much.
The second layer of protection comes from ‘Chakravyuh’, the homegrown antivirus system that isolates data. Chakravyuh ships with Maya OS.
The government is ready to deploy Maya OS by the end of the year. But what is the origin of Maya OS, what features will it offer, and will popular apps work on the new operating system? Here’s a detailed explainer on the new desi OS for PCs.
Maya OS - The New Desi Platform?
Maya OS has been ‘developed’ by the Ministry of Defence, which is going to use it for the Army, Navy and other forces by the year-end. The Ministry claims Maya OS is based on the open-source Ubuntu platform, which is a distribution of the Linux operating system. There are around 300 active Linux distributions, including Fedora, Peppermint, Kubuntu and Ubuntu. That said, Ubuntu is the most popular Linux distribution, and there are at least 50 derivatives of Ubuntu; popular ones include Linux Mint, Elementary OS and Ubuntu Kylin.
The reason people choose to develop derivatives of Ubuntu is simple: a derivative can be customised to its users’ preferences while retaining access to Ubuntu’s apps. Also, for peripherals like keyboards, mice and printers, finding drivers that support Ubuntu is easy. So Maya OS is just another Ubuntu derivative with access to popular free Ubuntu apps. The Ubuntu app store offers free alternatives to Microsoft Word, Photoshop, browsers and more, which will drastically reduce software licence costs for the government.
Apart from Apple’s macOS and Microsoft’s Windows, Linux distributions are the most widely used operating systems globally, serving many different purposes. In fact, even Microsoft has embraced Linux in recent years, shipping a Windows Subsystem for Linux inside Windows itself.
The Ministry has designed the platform with an interface and functionality intended to thwart possible cyber threats from bad actors. The OS was reportedly developed in six months with the help of a dedicated team from agencies such as the Defence Research and Development Organisation (DRDO), the Centre for Development of Advanced Computing (C-DAC) and the National Informatics Centre (NIC).
The name Maya OS is derived from the Hindi word for illusion; the idea is that hackers will face an illusion when they try to breach the Defence Ministry’s systems. The OS gets an added layer of security in the form of Chakravyuh, an anti-malware and antivirus tool for Maya OS that aims to deny access to malicious actors seeking to compromise data.
Maya OS - Like Microsoft Windows, But Safer
The UI of the Ubuntu-based Maya OS has a very Windows-like feel, which makes it easier for users who already know Windows’ features and functions to find their way around. In addition, Maya OS offers support for cloud storage, biometric authentication and digital signatures. In fact, even for personal use, you can buy a cheaper laptop that ships with FreeDOS or no operating system at all and install Ubuntu on it, gaining access to the free Ubuntu app store. While the apps may look different, it is just a matter of getting used to them.
The Defence Ministry has set a deadline of the end of 2023 for installing Maya OS on all its registered computers. The new indigenous OS is also expected to be deployed for systems used by the Army, Navy and the Air Force in the near future. | Operating Systems |
A probable early driver of Alzheimer's disease is the accumulation of molecules called amyloid peptides. These cause cell death, and are commonly found in the brains of Alzheimer’s patients. Researchers at Chalmers University of Technology, Sweden, have now shown that yeast cells that accumulate these misfolded amyloid peptides can recover after being treated with graphene oxide nanoflakes.
Alzheimer’s disease is an incurable brain disease, leading to dementia and death, that causes suffering for both the patients and their families. It is estimated that over 40 million people worldwide are living with the disease or a related form of dementia. According to Alzheimer’s News Today, the estimated global cost of these diseases is one percent of the global gross domestic product.
Misfolded amyloid-beta peptides, Aβ peptides, that accumulate and aggregate in the brain, are believed to be the underlying cause of Alzheimer’s disease. They trigger a series of harmful processes in the neurons (brain cells) – causing the loss of many vital cell functions or cell death, and thus a loss of brain function in the affected area. To date, there are no effective strategies to treat amyloid accumulation in the brain.
Researchers at Chalmers University of Technology have now shown that treatment with graphene oxide leads to reduced levels of aggregated amyloid peptides in a yeast cell model.
“This effect of graphene oxide has recently also been shown by other researchers, but not in yeast cells”, says Xin Chen, Researcher in Systems Biology at Chalmers and first author of the study. “Our study also explains the mechanism behind the effect. Graphene oxide affects the metabolism of the cells, in a way that increases their resistance to misfolded proteins and oxidative stress. This has not been previously reported.”
Investigating the mechanisms using a baker’s yeast model of Alzheimer’s disease
In Alzheimer’s disease, the amyloid aggregates exert their neurotoxic effects by causing various cellular metabolic disorders, such as stress in the endoplasmic reticulum – a major part of the cell, in which many of its proteins are produced. This can reduce cells’ ability to handle misfolded proteins, and consequently increase the accumulation of these proteins.
The aggregates also affect the function of the mitochondria, the cells’ powerhouses. Therefore, the neurons are exposed to increased oxidative stress (reactive molecules called oxygen radicals, which damage other molecules); something to which brain cells are particularly sensitive.
The Chalmers researchers have conducted the study by a combination of protein analysis (proteomics) and follow-up experiments. They have used baker's yeast, Saccharomyces cerevisiae, as an in vivo model for human cells. Both cell types have very similar systems for controlling protein quality. This yeast cell model was previously established by the research group to mimic human neurons affected by Alzheimer’s disease.
“The yeast cells in our model resemble neurons affected by the accumulation of amyloid-beta42, which is the form of amyloid peptide most prone to aggregate formation”, says Xin Chen. “These cells age faster than normal, show endoplasmic reticulum stress and mitochondrial dysfunction, and have elevated production of harmful reactive oxygen radicals.”
High hopes for graphene oxide nanoflakes
Graphene oxide nanoflakes are two-dimensional carbon nanomaterials with unique properties, including outstanding conductivity and high biocompatibility. They are used extensively in various research projects, including the development of cancer treatments, drug delivery systems and biosensors.
The nanoflakes are hydrophilic (water soluble) and interact well with biomolecules such as proteins. When graphene oxide enters living cells, it is able to interfere with the self-assembly processes of proteins.
“As a result, it can hinder the formation of protein aggregates and promote the disintegration of existing aggregates”, says Santosh Pandit, Researcher in Systems Biology at Chalmers and co-author of the study. “We believe that the nanoflakes act via two independent pathways to mitigate the toxic effects of amyloid-beta42 in the yeast cells.”
In one pathway, graphene oxide acts directly to prevent amyloid-beta42 accumulation. In the other, graphene oxide acts indirectly by a (currently unknown) mechanism, in which specific genes for stress response are activated. This increases the cell’s ability to handle misfolded proteins and oxidative stress.
How to treat Alzheimer’s patients is still a question for the future. However, according to the research group at Chalmers, graphene oxide holds great potential for future research in the field of neurodegenerative diseases. The research group has already been able to show that treatment with graphene oxide also reduces the toxic effects of protein aggregates specific to Huntington’s disease in a yeast model.
“The next step is to investigate whether it is possible to develop a drug delivery system based on graphene oxide for Alzheimer’s disease,” says Xin Chen. “We also want to test whether graphene oxide has beneficial effects in additional models of neurodegenerative diseases, such as Parkinson’s disease.”
More about: proteins and peptides
Proteins and peptides are fundamentally the same type of molecule and are made up of amino acids. Peptide molecules are smaller – typically containing less than 50 amino acids – and have a less complicated structure. Proteins and peptides can both become deformed if they fold in the wrong way during formation in the cell. When many amyloid-beta peptides accumulate in the brain, the aggregates are classified as proteins.
Journal: Advanced Functional Materials
Method of Research: Experimental study
Subject of Research: Cells
Article Title: Graphene Oxide Attenuates Toxicity of Amyloid-β Aggregates in Yeast by Promoting Disassembly and Boosting Cellular Stress Response
Article Publication Date: 7-Jul-2023 | Biology
Study: Popular dietary supplement may increase cancer risk and brain metastasis
University of Missouri researchers made the discovery while using bioluminescent imaging technology to study how nicotinamide riboside supplements work inside the body. Nov. 11, 2022
While previous studies have linked commercial dietary supplements like nicotinamide riboside (NR), a form of vitamin B3, to benefits related to cardiovascular, metabolic and neurological health, new research from the University of Missouri has found NR could actually increase the risk of serious disease, including developing cancer.
The international team of researchers led by Elena Goun, an associate professor of chemistry at MU, discovered high levels of NR could not only increase someone’s risk of developing triple-negative breast cancer, but also could cause the cancer to metastasize or spread to the brain. Once the cancer reaches the brain, the results are deadly because no viable treatment options exist at this time, said Goun, who is the corresponding author on the study.
“Some people take them [vitamins and supplements] because they automatically assume that vitamins and supplements only have positive health benefits, but very little is known about how they actually work,” Goun said. “Because of this lack of knowledge, we were inspired to study the basic questions surrounding how vitamins and supplements work in the body.”
The death of her 59-year-old father, only three months after he was diagnosed with colon cancer, moved Goun to pursue a better scientific understanding of cancer metabolism, or the energy through which cancer spreads in the body. Since NR is a supplement known for helping increase levels of cellular energy, and cancer cells feed off that energy with their increased metabolism, Goun wanted to investigate NR’s role in the development and spread of cancer.
“Our work is especially important given the wide commercial availability and a large number of ongoing human clinical trials where NR is used to mitigate the side effects of cancer therapy in patients,” Goun said.
The researchers used a novel bioluminescent imaging technology to compare how much NR was present in cancer cells, T cells and healthy tissues.
“While NR is already being widely used in people and is being investigated in so many ongoing clinical trials for additional applications, much of how NR works is a black box — it’s not understood,” Goun said. “So that inspired us to come up with this novel imaging technique based on ultrasensitive bioluminescent imaging that allows quantification of NR levels in real time in a non-invasive manner. The presence of NR is shown with light, and the brighter the light is, the more NR is present.”
Goun said the findings of the study emphasize the importance of having careful investigations of potential side effects for supplements like NR prior to their use in people who may have different types of health conditions. In the future, Goun would like to provide information that could potentially lead to the development of certain inhibitors to help make cancer therapies like chemotherapy more effective in treating cancer. The key to this approach, Goun said, is to look at it from a personalized medicine standpoint.
“Not all cancers are the same in every person, especially from the standpoint of metabolic signatures,” Goun said. “Oftentimes cancers can even change their metabolism before or after chemotherapy.”
“A bioluminescent-based probe for in vivo non-invasive monitoring of nicotinamide riboside uptake reveals a link between metastasis and NAD+ metabolism” was published in Biosensors and Bioelectronics. Funding was provided by grants from the European Research Council (ERC-2019-COG, 866338) and Swiss National Foundation (51NF40_185898), as well as support from NCCR Chemical Biology.
Other authors on the study are Arkadiy Bazhin, Pavlo Khodakivskyi, Ekaterina Solodnikova and Aleksey Yevtodiyenko at MU; Tamara Maric at the Swiss Federal Institute of Technology; Greta Maria Paola Giordano Attianese, George Coukos and Melita Irving at The Ludwig Institute for Cancer Research in Switzerland; and Magali Joffraud and Carles Cantó at the Nestlé Institute of Health Sciences in Switzerland. Bazhin, Khodakivskyi, Mikhaylov, Solodnikova, Yevtodiyenko and Goun are also affiliated with the Swiss Federal Institute of Technology. Mikhaylov, Yevtodiyenko and Goun are also affiliated with SwissLumix SARL in Switzerland. | Biology
Berlin, Nevada, is a treasure chest for paleontologists. Just down the road from now-abandoned gold and silver mines, a rockbound collection of bones hints at an even richer past. The Berlin-Ichthyosaur State Park is teeming with dozens of fossils of ancient marine reptiles. That bone bed is so abundant and weird that researchers have been scratching their heads over it for decades.
“There are sites with way more dense occurrences of ichthyosaur skeletons, including places in Chile and Germany,” says Nick Pyenson, curator of fossil marine mammals at the Smithsonian National Museum of Natural History. “But this place, Berlin-Ichthyosaur in eastern Nevada, has really escaped explanation for a long time.” In one particular quarry, at least seven individuals from the genus Shonisaurus—a bloated, bus-sized, dolphin-like reptile with four limb-like flippers—lay essentially stacked atop one another.
Previous hypotheses largely focused on physical or environmental reasons for the cluster of fossils. One suggested that the animals had gotten stranded in shallow water and died as a group some 230 million years ago. Or maybe a volcanic eruption did them in. Pyenson had another hunch, one that his team tested using 3D visualizations of the site, as well as fossils and other clues in the geological record.
Writing today in the journal Current Biology, Pyenson’s team presents evidence that the shonisaurs came there to reproduce. The team concludes that the animals migrated long distances to give birth, like some whales do today. The discovery not only represents an example of “convergent evolution,” in which the same traits independently evolve in different species, but also the oldest example of migration in groups to a designated calving ground.
“They're making quite a convincing case,” says Lene Liebe Delsett, a vertebrate paleontologist at the University of Oslo, Norway, who was not involved in the study. “Ichthyosaurs were the first large marine tetrapods. And throughout the Triassic, they varied quite a lot, so there was a large diversity. It's just a very interesting period of time to know more about.”
The origin story of the shonisaurs begins with death—a lot of it.
Some 251 million years ago, between the Permian and Triassic periods, Earth's biggest extinction event annihilated about 95 percent of all marine species. This so-called “Great Dying” mowed down the diverse landscape of creatures in the ocean.
Some of the animals that grew back in their place turned out to be weirder and larger than ever before.
The ensuing Triassic started an evolutionary arms race. Prey evolved harder shells and better mobility, predators crunched through ammonite shells and hunted fish better than ever, and so on. Ichthyosaurs, which evolved from terrestrial reptiles into new species of various sizes, partly drove this pressure and quickly dominated the ocean. The Shonisaurus genus, in particular, grew to be some of the largest marine predators around. “They achieved whale sizes before anything else,” says Pyenson.
Pyenson is normally more of a whale guy; he specializes in mammals, which split from reptiles about 325 million years ago. But ancient marine reptiles like those under the order Ichthyosauria bear many similarities to existing marine mammals. Their ancestors came from land, they birthed live young, they had similar flippers, and they are tetrapods, meaning four-limbed. And Pyenson is well versed in this type of mystery.
About a decade ago in Atacama, Chile, he and his South American collaborators used 3D mapping and chemical analyses to show that a tight cluster of at least 40 fossilized whales must have died from a toxic algal bloom 7 to 9 million years ago.
“It was a neat proof of concept, and a surprising discovery,” Pyenson says, because 3D scanning allowed for the data to be analyzed away from the actual site over long time periods. “It was innovative, because none of those fossil whales ever left Chile.”
When it came to the mystery of Nevada’s boneyard, he recalls chatting with paleontologist and study coauthor Neil Kelley of Vanderbilt University about the convergent evolution of the traits shared by ichthyosaurs and marine mammals. “We put our heads together,” says Pyenson. What if there were not only convergences in anatomy, but behavior?
They turned to chemical sampling and huge 3D scans of the site to test some hypotheses. Long-range lasers allow researchers to digitize enormous surfaces (like fossil sites) down to a centimeter scale. The resulting “point cloud” shows where each skeleton sits in space. “When you're confronted with parts of the skeleton that are that large and also distributed over a very large area,” says Pyenson, “taking one photo doesn't really provide the kind of data you need to test the ideas you have. That's where creating an undistorted 3D model has its huge advantages.”
First, based on geological evidence from the site, they ruled out one of the previous theories: mass stranding. The mudstone and carbonate sediment around the specimens indicates that the site had been deep underwater. The team also nixed the lethal volcanic eruption theory—they found no telltale signal like elevated levels of mercury in the rock.
Then they tried to figure out what else might have been going on. They inferred from the location that although the water was deep, the site had not been too far from shore. Although Nevada is inland today, this park is thought to have once been a tropical gulf near an archipelago, or cluster of islands. “Archipelagos are really good environments if you want to be protected,” says Pyenson. “So that setting becomes really important when you consider the other clues that we collected.”
The biggest clue the team noticed from analyzing fossils was their demographics. Each was either really big or really small—tiny enough to be an embryo or newborn. “We suspect that these are the remains of recently born or soon-to-be born ichthyosaurs from this time,” says Pyenson. “We find nothing else there.” The lack of adolescents signals that this was a group nursery or calving ground, and given the preponderance of bones, they believe these animals continued coming here for hundreds of thousands of years.
Pyenson thinks this is evidence of separation between places where the animals ate and where they gave birth. “They travel vast distances to feed in one place and give birth in another,” he says. “It’s something that we see in today's large oceangoing predators, including large whales and sharks.” They also find no evidence of large prey that could support such big ichthyosaurs at this site.
This behavioral overlap across such different species fascinates Pyenson. Sharks evolved long before ichthyosaurs. Whales evolved long after. But the same behavior keeps cropping up.
(His team has created an interactive website if you want to explore their results.)
Reproduction is understudied in the field, says Delsett, even though “it's the most central thing that animals do.” Delsett praises the thoroughness of the new study, and is already thinking about how to incorporate its lessons into her own work. She has been studying a site with about 30 ichthyosaurs in Spitsbergen, a Norwegian archipelago near the North Pole. “We have this one question we never answered: Why do we have so many ichthyosaurs in one site?” she says. “This is a good framework. I can use all of these lines of research.”
Mark McMenamin, a paleontologist at Mount Holyoke College who was not involved with the study, agrees that the team’s laser scanning methodology is valuable for the field. However, he disagrees with their interpretation. McMenamin has long argued that the strange Nevada site is the work of a giant, ancient cephalopod—like a squid or octopus—that killed the shonisaurs and deliberately arranged their bones. McMenamin notes patterns in the skeletons: a partial tail with nothing else attached could be the aftermath of a kill; tiling or “tessellations” in the fossil bones could suggest deliberate manipulation.
“They're out hunting in blue water, and there was something else out there that began hunting them,” he says. (This hypothesis, known as “Triassic Kraken,” has not been validated by peer review.)
Pyenson rejects the kraken hypothesis outright. “I would charitably say it's an implausible and untestable hypothesis,” he says. “It assumes the existence of taxa and behaviors that we don't have any evidence for.” That solitary tail skeleton shows no evidence of predation, he says, like teeth marks. And the tessellations McMenamin noted may just look regular because the bones fell aligned like dominoes. “We think what we've put our finger on is a compelling biological explanation—that it's grouping behavior,” says Pyenson.
Still, the two ideas illustrate that some details from 230 million years ago are just unknowable. For example, the new paper doesn’t explain how these ichthyosaurs actually died. We may never know, says Pyenson. The fossil site, he says, has been “massively deformed by tectonism and just that amount of geologic time—the bones aren't as well preserved.”
Sometimes new evidence or methods can demystify fossils in surprising ways. Paleontologists used to think that shonisaurs were toothless filter-feeders like today’s large whales, eating the small prey slipped into their mouths. Then, they found adults with teeth. Shonisaurus may have even preyed on other ichthyosaurs. “We don't know. That's among the many mysteries,” says Pyenson. “We have a really hard time reconstructing that food web.”
Unsatisfying answers are just part of the job, he continues: “We do our best, move the ball as far downfield as we can, and leave the rest for the next generation of scientists.” | Biology
By Nathan Collins - Stanford University
Starting about 7,000 years ago, and extending over the next two millennia, recent studies suggest, the genetic diversity of men—specifically, the diversity of their Y chromosomes—collapsed. The collapse was so extreme it was as if there were only one man left to mate for every 17 women. Anthropologists and biologists were perplexed, but researchers now believe they’ve found a simple—if revealing—explanation. The collapse, they argue, was the result of generations of war between patrilineal clans, whose male ancestors determined membership.
The outlines of that idea came to Tian Chen Zeng, an undergraduate in sociology at Stanford University, after spending hours reading blog posts that speculated—unconvincingly, Zeng thought—on the origins of the “Neolithic Y-chromosome bottleneck,” as the event is known. He soon shared his ideas with his high school classmate Alan Aw, also a Stanford undergraduate in mathematical and computational science.
“He was really waxing lyrical about it,” Aw says, so the pair took their idea to biology professor Marcus Feldman. Zeng, Aw, and Feldman report their results in Nature Communications.
It’s not unprecedented for human genetic diversity to take a nosedive once in a while, but the Y-chromosome bottleneck, which was inferred from genetic patterns in modern humans, was an odd one. First, it was observed only in men—more precisely, it was detected only through genes on the Y chromosome, which fathers pass to their sons. Second, the bottleneck is much more recent than other biologically similar events, hinting that its origins might have something to do with changing social structures.
Certainly, the researchers point out, social structures were changing. After the onset of farming and herding around 12,000 years ago, societies grew increasingly organized around extended kinship groups, many of them patrilineal clans—a cultural fact with potentially significant biological consequences. The key is how clan members are related to each other. While women may have married into a clan, men in such clans are all related through male ancestors and therefore tend to have the same Y chromosomes. From the point of view of those chromosomes at least, it’s almost as if everyone in a clan has the same father.
That only applies within one clan, however, and there could still be considerable variation between clans. To explain why even between-clan variation might have declined during the bottleneck, the researchers hypothesized that wars, if they repeatedly wiped out entire clans over time, would also wipe out a good many male lineages and their unique Y chromosomes in the process.
To test their ideas, the researchers turned to mathematical models and computer simulations in which men fought—and died—for the resources their clans needed to survive.
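To make the modeling idea concrete, here is a minimal, hypothetical toy simulation in Python (it is not the study's actual model, and every parameter, from clan counts to the war rule, is invented for illustration). It assumes clans fight in pairs each round, the losing clan is wiped out, and the winner founds a replacement clan; in the patrilineal case every man in a clan carries the same Y lineage, while in the non-patrilineal case lineages start out mixed across clans.

```python
import random

def surviving_lineages(num_clans=100, men_per_clan=20, wars=500, patrilineal=True, seed=0):
    """Toy model: each round two clans clash, the loser is wiped out, and the
    winner founds a replacement clan. Returns how many distinct Y lineages remain."""
    rng = random.Random(seed)
    if patrilineal:
        # Patrilineal clan: all men descend from one male ancestor,
        # so every man in the clan carries the same Y lineage.
        clans = [[lineage] * men_per_clan for lineage in range(num_clans)]
    else:
        # Non-patrilineal clans: men (and their Y lineages) are shuffled
        # across clans, so each clan holds a mix of lineages.
        men = [lineage for lineage in range(num_clans) for _ in range(men_per_clan)]
        rng.shuffle(men)
        clans = [men[i * men_per_clan:(i + 1) * men_per_clan] for i in range(num_clans)]

    for _ in range(wars):
        loser, winner = rng.sample(range(num_clans), 2)
        # The loser's men die; the winner's men father the clan that takes its place.
        clans[loser] = [rng.choice(clans[winner]) for _ in range(men_per_clan)]

    return len({lineage for clan in clans for lineage in clan})

# Typically the patrilineal run ends with far fewer surviving lineages.
print("patrilineal:", surviving_lineages(patrilineal=True))
print("non-patrilineal:", surviving_lineages(patrilineal=False))
```

Repeated runs with different seeds show the same qualitative pattern the study describes: when whole patrilineal clans are eliminated, whole Y lineages vanish with them, whereas mixed clans buffer each lineage against any single defeat. The published model adds demography, mutation and more realistic conflict, so treat this only as an intuition pump.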
As the team expected, wars between patrilineal clans drastically reduced Y chromosome diversity over time, while conflict between non-patrilineal clans—groups where both men and women could move between clans—did not.
Zeng, Aw, and Feldman’s model also accounted for the observation that among the male lineages that survived the Y-chromosome bottleneck, a few lineages underwent dramatic expansions, consistent with the patrilineal clan model, but not others.
Now the researchers are looking at applying the framework in other areas—anywhere “historical and geographical patterns of cultural interactions could explain the patterns you see in genetics,” says Feldman.
The Center for Computational, Evolutionary and Human Genomics, the Morrison Institute for Population and Resource Studies, and a grant from the National Science Foundation supported the work.
Source: Stanford University via Futurity - Original Study DOI: 10.1038/s41467-018-04375-6 | Biology
The hidden role of lipid droplets in fertility and beyond
Once thought of merely as fat storage depots, lipid droplets are now believed to play important roles in human health and fertility.
Within our cells are structures called lipid droplets that serve as storage units for energy in the form of lipids or fats. Because fat is an important energy source for cells and organisms, scientists had long assumed that lipid droplets had a straightforward role during egg production, as energy providers for the developing embryo.
In the past few years, however, scientists have found that lipid droplets play additional roles. Researchers at the University of Rochester and the University of Iowa recently discovered that lipid droplets play a vital role during the development of eggs in fruit flies. In a paper published in the journal Development, the researchers report that lipid droplets provide a signal that triggers cellular changes necessary for the growth of the egg—and could affect fertility in myriad organisms.
"We suspect this function is widespread and may also contribute to fertility in humans," says Michael Welte, a professor in the Department of Biology, who led the Rochester group.
The researchers found that during egg production, a signal is generated by an enzyme on the surface of the lipid droplets. The enzyme releases a specific type of lipid called arachidonic acid. The arachidonic acid is then converted into signaling molecules called prostaglandins. Prostaglandins stimulate the cell to rearrange its internal structure and prepare for the next stage of egg development.
When the enzyme on the lipid droplets is absent, prostaglandin production is affected, leading to defects in cell structure and impaired egg maturation.
Previous research has shown that prostaglandins have various other significant roles in human health, including regulating fever, pain, and inflammation.
"Many of us frequently use medication to suppress prostaglandin production, such as aspirin and ibuprofen," Welte says. In certain cells of the human immune system, there is a connection between lipid droplets and prostaglandin production.
But Welte's research shows that the connection between lipid droplets and prostaglandin production is not limited to immune cells. The research suggests lipid droplets may be instrumental for prostaglandin production in other tissues as well, making them key players in multiple biological processes.
"Lipid droplets support a critical step in producing an egg," he says, "and we now speculate that lipid droplets may influence even more processes in animal development by modulating prostaglandin signals."
More information: Michelle S. Giedt et al, Adipose triglyceride lipase promotes prostaglandin-dependent actin remodeling by regulating substrate release from lipid droplets, Development (2023). DOI: 10.1242/dev.201516
Journal information: Development
Provided by University of Rochester | Biology |
How do we increase food production by more than 50%, on a limited amount of arable land, to feed a projected 10 billion people by 2050? The solution could come in the form of nutritious and protein-dense microalgae (single-celled), grown in onshore, seawater-fed aquaculture systems. A paper, “Transforming the Future of Marine Aquaculture: A Circular Economy Approach,” published in the September issue of Oceanography, describes how growing algae onshore could close a projected gap in society’s future nutritional demands while also improving environmental sustainability. “We have an opportunity to grow food that is highly nutritious, fast-growing, and we can do it in environments where we’re not competing for other uses,” said Charles Greene, professor emeritus of earth and atmospheric sciences and the paper’s senior author. “And because we’re growing it in relatively enclosed and controlled facilities, we don’t have the same kind of environmental impacts.” Even as the Earth’s population grows in the coming decades, climate change, limited arable land, lack of freshwater and environmental degradation will all constrain the amount of food that can be grown, according to the paper. “We just can’t meet our goals with the way we currently produce food and our dependence on terrestrial agriculture,” Greene said. With wild fish stocks already heavily exploited, and with constraints on marine finfish, shellfish, and seaweed aquaculture in the coastal ocean, Greene and colleagues argue for growing algae in onshore aquaculture facilities. GIS-based models, developed by former Cornell graduate student, Celina Scott-Buechler ’18, M.S. ’21, predict yields based on annual sunlight, topography, and other environmental and logistical factors. The model results reveal that the best locations for onshore algae farming facilities lie along the coasts of the Global South, including desert environments. “Algae can actually become the breadbasket for the Global South,” Greene said. “In that narrow strip of land, we can produce more than all the protein that the world will need.” Along with high protein content, the researchers noted that algae provide nutrients lacking in vegetarian diets, such as essential amino acids and minerals found in meat and omega-3 fatty acids often sourced in fish and seafood. Algae, which grow 10 times faster than traditional crops, can be produced in a manner that is more efficient than agriculture in its use of nutrients. For example, when farmers add nitrogen and phosphorus fertilizers to grow terrestrial crops, about half runs off fields and pollutes waterways. With algae grown in enclosed facilities, excess nutrients can be captured and reused. Similarly, carbon dioxide must be added to aquaculture ponds to grow algae. Researchers and companies have been experimenting with adding algae to construction materials and cement, where the carbon gets sequestered and removed from the atmosphere. “If we use algae in these long-lived structural materials, then we have the potential to be carbon negative, and part of the solution to climate change,” Greene said. One challenge is that sourcing CO2 is currently expensive and energy inefficient, but engineers are experimenting with concentrated solar technologies that use mirrors to focus and concentrate sunlight to heat a working fluid, which in turn can be used in direct air capture technologies that capture carbon dioxide from the air. 
Also, while algae farming solves many food-related and environmental problems on paper, it can only be successful if people adopt it in diets and for other uses. Adding nutritious algae as a major ingredient or supplement in plant-based meats, which currently rely on less nutritious pea and soy, is one possibility. Co-author Xingen Lei, professor of animal science at Cornell, and other colleagues have found that when algae is added to chicken feed, hens lay eggs with triple the amount of omega-3 fatty acids as normal eggs. A follow-up perspectives piece that highlights and expands on the points of this paper, will appear in the October issue of PLoS Biology. Scott-Buechler, currently a doctoral student at Stanford, is a coauthor on both works. The study was supported by the U.S. Department of Energy and the U.S. Department of Agriculture, among others. | Biology |
A paralyzed man who hasn’t spoken in 15 years uses a brain-computer interface that decodes his intended speech, one word at a time.
A computer screen shows the question “Would you like some water?” Underneath, three dots blink, followed by words that appear, one at a time: “No I am not thirsty.” It was brain activity that made those words materialize—the brain of a man who has not spoken for more than 15 years, ever since a stroke damaged the connection between his brain and the rest of his body, leaving him mostly paralyzed. He has used many other technologies to communicate; most recently, he used a pointer attached to his baseball cap to tap out words on a touchscreen, a method that was effective but slow. He volunteered for my research group’s clinical trial at the University of California, San Francisco in hopes of pioneering a faster method. So far, he has used the brain-to-text system only during research sessions, but he wants to help develop the technology into something that people like himself could use in their everyday lives. In our pilot study, we draped a thin, flexible electrode array over the surface of the volunteer’s brain. The electrodes recorded neural signals and sent them to a speech decoder, which translated the signals into the words the man intended to say. It was the first time a paralyzed person who couldn’t speak had used neurotechnology to broadcast whole words—not just letters—from the brain. That trial was the culmination of more than a decade of research on the underlying brain mechanisms that govern speech, and we’re enormously proud of what we’ve accomplished so far. But we’re just getting started. My lab at UCSF is working with colleagues around the world to make this technology safe, stable, and reliable enough for everyday use at home. We’re also working to improve the system’s performance so it will be worth the effort.
How neuroprosthetics work
The first version of the brain-computer interface gave the volunteer a vocabulary of 50 practical words.
Neuroprosthetics have come a long way in the past two decades. Prosthetic implants for hearing have advanced the furthest, with designs that interface with the cochlear nerve of the inner ear or directly into the auditory brain stem. There’s also considerable research on retinal and brain implants for vision, as well as efforts to give people with prosthetic hands a sense of touch. All of these sensory prosthetics take information from the outside world and convert it into electrical signals that feed into the brain’s processing centers. The opposite kind of neuroprosthetic records the electrical activity of the brain and converts it into signals that control something in the outside world, such as a robotic arm, a video-game controller, or a cursor on a computer screen. That last control modality has been used by groups such as the BrainGate consortium to enable paralyzed people to type words—sometimes one letter at a time, sometimes using an autocomplete function to speed up the process. For that typing-by-brain function, an implant is typically placed in the motor cortex, the part of the brain that controls movement. Then the user imagines certain physical actions to control a cursor that moves over a virtual keyboard. Another approach, pioneered by some of my collaborators in a 2021 paper, had one user imagine that he was holding a pen to paper and was writing letters, creating signals in the motor cortex that were translated into text. That approach set a new record for speed, enabling the volunteer to write about 18 words per minute. In my lab’s research, we’ve taken a more ambitious approach. Instead of decoding a user’s intent to move a cursor or a pen, we decode the intent to control the vocal tract, comprising dozens of muscles governing the larynx (commonly called the voice box), the tongue, and the lips.
The seemingly simple conversational setup for the paralyzed man [in pink shirt] is enabled by both sophisticated neurotech hardware and machine-learning systems that decode his brain signals.
I began working in this area more than 10 years ago. As a neurosurgeon, I would often see patients with severe injuries that left them unable to speak. To my surprise, in many cases the locations of brain injuries didn’t match up with the syndromes I learned about in medical school, and I realized that we still have a lot to learn about how language is processed in the brain. I decided to study the underlying neurobiology of language and, if possible, to develop a brain-machine interface (BMI) to restore communication for people who have lost it. In addition to my neurosurgical background, my team has expertise in linguistics, electrical engineering, computer science, bioengineering, and medicine. Our ongoing clinical trial is testing both hardware and software to explore the limits of our BMI and determine what kind of speech we can restore to people.
The muscles involved in speech
Speech is one of the behaviors that sets humans apart. Plenty of other species vocalize, but only humans combine a set of sounds in myriad different ways to represent the world around them. It’s also an extraordinarily complicated motor act—some experts believe it’s the most complex motor action that people perform. Speaking is a product of modulated air flow through the vocal tract; with every utterance we shape the breath by creating audible vibrations in our laryngeal vocal folds and changing the shape of the lips, jaw, and tongue. Many of the muscles of the vocal tract are quite unlike the joint-based muscles such as those in the arms and legs, which can move in only a few prescribed ways. For example, the muscle that controls the lips is a sphincter, while the muscles that make up the tongue are governed more by hydraulics—the tongue is largely composed of a fixed volume of muscular tissue, so moving one part of the tongue changes its shape elsewhere. The physics governing the movements of such muscles is totally different from that of the biceps or hamstrings. Because there are so many muscles involved and they each have so many degrees of freedom, there’s essentially an infinite number of possible configurations. But when people speak, it turns out they use a relatively small set of core movements (which differ somewhat in different languages). For example, when English speakers make the “d” sound, they put their tongues behind their teeth; when they make the “k” sound, the backs of their tongues go up to touch the ceiling of the back of the mouth. Few people are conscious of the precise, complex, and coordinated muscle actions required to say the simplest word.
Team member David Moses looks at a readout of the patient’s brain waves [left screen] and a display of the decoding system’s activity [right screen].
My research group focuses on the parts of the brain’s motor cortex that send movement commands to the muscles of the face, throat, mouth, and tongue. Those brain regions are multitaskers: They manage muscle movements that produce speech and also the movements of those same muscles for swallowing, smiling, and kissing. Studying the neural activity of those regions in a useful way requires both spatial resolution on the scale of millimeters and temporal resolution on the scale of milliseconds. Historically, noninvasive imaging systems have been able to provide one or the other, but not both. When we started this research, we found remarkably little data on how brain activity patterns were associated with even the simplest components of speech: phonemes and syllables. Here we owe a debt of gratitude to our volunteers. At the UCSF epilepsy center, patients preparing for surgery typically have electrodes surgically placed over the surfaces of their brains for several days so we can map the regions involved when they have seizures. During those few days of wired-up downtime, many patients volunteer for neurological research experiments that make use of the electrode recordings from their brains. My group asked patients to let us study their patterns of neural activity while they spoke words. The hardware involved is called electrocorticography (ECoG). The electrodes in an ECoG system don’t penetrate the brain but lie on the surface of it. Our arrays can contain several hundred electrode sensors, each of which records from thousands of neurons. So far, we’ve used an array with 256 channels. Our goal in those early studies was to discover the patterns of cortical activity when people speak simple syllables. We asked volunteers to say specific sounds and words while we recorded their neural patterns and tracked the movements of their tongues and mouths. Sometimes we did so by having them wear colored face paint and using a computer-vision system to extract the kinematic gestures; other times we used an ultrasound machine positioned under the patients’ jaws to image their moving tongues.
The system starts with a flexible electrode array that’s draped over the patient’s brain to pick up signals from the motor cortex. The array specifically captures movement commands intended for the patient’s vocal tract. A port affixed to the skull guides the wires that go to the computer system, which decodes the brain signals and translates them into the words that the patient wants to say. His answers then appear on the display screen.
We used these systems to match neural patterns to movements of the vocal tract. At first we had a lot of questions about the neural code. One possibility was that neural activity encoded directions for particular muscles, and the brain essentially turned these muscles on and off as if pressing keys on a keyboard. Another idea was that the code determined the velocity of the muscle contractions. Yet another was that neural activity corresponded with coordinated patterns of muscle contractions used to produce a certain sound. (For example, to make the “aaah” sound, both the tongue and the jaw need to drop.) What we discovered was that there is a map of representations that controls different parts of the vocal tract, and that together the different brain areas combine in a coordinated manner to give rise to fluent speech.
The role of AI in today’s neurotech
Our work depends on the advances in artificial intelligence over the past decade. We can feed the data we collected about both neural activity and the kinematics of speech into a neural network, then let the machine-learning algorithm find patterns in the associations between the two data sets. It was possible to make connections between neural activity and produced speech, and to use this model to produce computer-generated speech or text. But this technique couldn’t train an algorithm for paralyzed people because we’d lack half of the data: We’d have the neural patterns, but nothing about the corresponding muscle movements. The smarter way to use machine learning, we realized, was to break the problem into two steps. First, the decoder translates signals from the brain into intended movements of muscles in the vocal tract, then it translates those intended movements into synthesized speech or text. We call this a biomimetic approach because it copies biology; in the human body, neural activity is directly responsible for the vocal tract’s movements and is only indirectly responsible for the sounds produced. A big advantage of this approach comes in the training of the decoder for that second step of translating muscle movements into sounds. Because those relationships between vocal tract movements and sound are fairly universal, we were able to train the decoder on large data sets derived from people who weren’t paralyzed.
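This two-stage split can be sketched in code. The outline below is a hypothetical illustration in Python with PyTorch, not the actual UCSF decoder; the layer sizes, channel counts, and articulator/token dimensions are invented for illustration, and a real system would add careful signal preprocessing, per-participant training, and a language model on top.

```python
import torch
from torch import nn

class BrainToKinematics(nn.Module):
    """Stage 1: windows of ECoG features -> intended vocal-tract movements."""
    def __init__(self, n_channels=256, n_articulators=33, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_articulators)

    def forward(self, ecog):                 # ecog: (batch, time, channels)
        states, _ = self.rnn(ecog)
        return self.head(states)             # (batch, time, articulator trajectories)

class KinematicsToText(nn.Module):
    """Stage 2: vocal-tract movements -> per-frame character log-probabilities.
    This stage could be pre-trained on recordings from non-paralyzed speakers
    and decoded with a CTC-style search plus a language model."""
    def __init__(self, n_articulators=33, n_tokens=29, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_articulators, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_tokens)

    def forward(self, kinematics):
        states, _ = self.rnn(kinematics)
        return self.head(states).log_softmax(dim=-1)

# Toy end-to-end pass on random "neural data": 1 trial, 200 time steps, 256 electrodes.
ecog = torch.randn(1, 200, 256)
stage1, stage2 = BrainToKinematics(), KinematicsToText()
char_logprobs = stage2(stage1(ecog))
print(char_logprobs.shape)                   # torch.Size([1, 200, 29])
```

The design choice worth noting is the asymmetry between the stages: the first must be fit to each participant's own attempted-speech data, while the second can be trained on large corpora from non-paralyzed speakers, which is exactly what the biomimetic split is meant to exploit.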
A clinical trial to test our speech neuroprosthetic
The next big challenge was to bring the technology to the people who could really benefit from it. The National Institutes of Health (NIH) is funding our pilot trial, which began in 2021. We already have two paralyzed volunteers with implanted ECoG arrays, and we hope to enroll more in the coming years. The primary goal is to improve their communication, and we’re measuring performance in terms of words per minute. An average adult typing on a full keyboard can type 40 words per minute, with the fastest typists reaching speeds of more than 80 words per minute.
Edward Chang was inspired to develop a brain-to-speech system by the patients he encountered in his neurosurgery practice.
We think that tapping into the speech system can provide even better results. Human speech is much faster than typing: An English speaker can easily say 150 words in a minute. We’d like to enable paralyzed people to communicate at a rate of 100 words per minute. We have a lot of work to do to reach that goal, but we think our approach makes it a feasible target. The implant procedure is routine. First the surgeon removes a small portion of the skull; next, the flexible ECoG array is gently placed across the surface of the cortex. Then a small port is fixed to the skull bone and exits through a separate opening in the scalp. We currently need that port, which attaches to external wires to transmit data from the electrodes, but we hope to make the system wireless in the future. We’ve considered using penetrating microelectrodes, because they can record from smaller neural populations and may therefore provide more detail about neural activity. But the current hardware isn’t as robust and safe as ECoG for clinical applications, especially over many years. Another consideration is that penetrating electrodes typically require daily recalibration to turn the neural signals into clear commands, and research on neural devices has shown that speed of setup and performance reliability are key to getting people to use the technology. That’s why we’ve prioritized stability in creating a “plug and play” system for long-term use. We conducted a study looking at the variability of a volunteer’s neural signals over time and found that the decoder performed better if it used data patterns across multiple sessions and multiple days. In machine-learning terms, we say that the decoder’s “weights” carried over, creating consolidated neural signals.
Because our paralyzed volunteers can’t speak while we watch their brain patterns, we asked our first volunteer to try two different approaches. He started with a list of 50 words that are handy for daily life, such as “hungry,” “thirsty,” “please,” “help,” and “computer.” During 48 sessions over several months, we sometimes asked him to just imagine saying each of the words on the list, and sometimes asked him to overtly try to say them. We found that attempts to speak generated clearer brain signals and were sufficient to train the decoding algorithm. Then the volunteer could use those words from the list to generate sentences of his own choosing, such as “No I am not thirsty.” We’re now pushing to expand to a broader vocabulary. To make that work, we need to continue to improve the current algorithms and interfaces, but I am confident those improvements will happen in the coming months and years. Now that the proof of principle has been established, the goal is optimization. We can focus on making our system faster, more accurate, and—most important—safer and more reliable. Things should move quickly now. Probably the biggest breakthroughs will come if we can get a better understanding of the brain systems we’re trying to decode, and how paralysis alters their activity. We’ve come to realize that the neural patterns of a paralyzed person who can’t send commands to the muscles of their vocal tract are very different from those of an epilepsy patient who can. We’re attempting an ambitious feat of BMI engineering while there is still lots to learn about the underlying neuroscience. We believe it will all come together to give our patients their voices back. | Biology
In a new step for Crispr, scientists have used the gene-editing tool to make personalized modifications to cancer patients’ immune cells to supercharge them against their tumors. In a small study published today in the journal Nature, a US team showed that the approach was feasible and safe, but was successful only in a handful of patients.Cancer arises when cells acquire genetic mutations and divide uncontrollably. Every cancer is driven by a unique set of mutations, and each person has immune cells with receptors that can recognize these mutations and differentiate cancer cells from normal ones. But patients don’t often have enough immune cells with these receptors in order to mount an effective response against their cancer. In this Phase 1 trial, researchers identified each patient’s receptors, inserted them into immune cells lacking them, and grew more of these modified cells. Then, the bolstered immune cells were unleashed into each patient’s bloodstream to attack their tumor.“What we’re trying to do is really harness every patient’s tumor-specific mutations,” says Stefanie Mandl, chief scientific officer at Pact Pharma and an author on the study. The company worked with experts from the University of California, Los Angeles, the California Institute of Technology, and the nonprofit Institute for Systems Biology in Seattle to design the personalized therapies.The researchers began by separating T cells from the blood of 16 patients with solid tumors, including colon, breast, or lung cancer. (T cells are the immune system component with these receptors.) For each patient, they identified dozens of receptors capable of binding to cancer cells taken from their own tumors. The team chose up to three receptors for each patient, and using Crispr, added the genes for these receptors to the person’s T cells in the lab.Scientists grew more of the edited cells, enough to constitute what they hoped would be a therapeutic dose. Then they infused the edited cells back into each of the volunteers, who had all previously been treated with several rounds of chemotherapy. The edited T cells traveled to the tumors and infiltrated them.In six of the patients, the experimental therapy froze the growth of the tumors. In the other 11 people, their cancer advanced. Two had side effects related to the edited T cell therapy—one had fevers and chills, and the other one experienced confusion. Everyone in the trial had expected side effects from the chemotherapy.Mandl suspects the response to the therapy was limited because the patients’ cancers were already very advanced by the time they enrolled in the trial. Also, later tests revealed that some of the receptors the team chose could find the tumor, but didn’t have potent anticancer effects.Bruce Levine, a professor of cancer gene therapy at the University of Pennsylvania, says the ability to rapidly identify patients’ unique cancer receptors and generate tailored treatments using them is impressive. But the challenge will be in picking the right ones that actually kill cancer cells. “The fact that you can get those T cells into a tumor is one thing. But if they get there and don’t do anything, that’s disappointing,” he says.Solid tumors have also proven more difficult to treat with T cells than liquid tumors, or blood cancers, which include leukemia, lymphoma, and myeloma. 
Therapies that use traditional genetic engineering (rather than Crispr) to modify patients’ T cells have been approved for blood cancers, but they don’t work well on solid tumors.“As soon as the cancer gets complicated and develops its own architecture and a microenvironment and all sorts of defense mechanisms, then it becomes harder for the immune system to tackle it,” says Waseem Qasim, professor of cell and gene therapy at the Great Ormond Street Institute of Child Health at University College London.While the results of the study were limited, researchers hope to find a way to use Crispr against cancer, because the disease demands new treatments. Chemotherapy and radiation are effective for many patients, but they kill healthy cells as well as cancerous ones. Tailored therapies may offer a way to selectively target a patient’s unique set of cancer mutations and kill only those cells. Plus, some patients don’t respond to traditional therapies, or their cancer comes back later.But it’s still early days for Crispr cancer research. In a study at the University of Pennsylvania that Levine coauthored, three patients—two with blood cancer and the third with bone cancer—were treated with their own Crispr-edited T cells. Investigators had removed three genes from those cells to make them better at battling cancer. A preliminary study showed that the edited cells migrated to the tumor and survived after infusion, but the Penn team hasn’t published findings on how the patients fared after the treatment.Meanwhile, Qasim’s team in London has treated six children who were seriously ill with leukemia, using Crispr-edited T cells from donors. Four of the six went into remission after a month, which allowed them to receive a stem cell transplant, according to a study published recently in the journal Science. Of those four, two remain in remission nine months and 18 months after treatment, respectively, while two relapsed following their stem cell transplant.While there’s still much to learn about how to improve these treatments, researchers like Qasim hope that new technologies like Crispr will ultimately yield a better match between therapy and patient. “There’s no one-size-fits-all treatment for cancer,” says Qasim. “What these kinds of studies hope to demonstrate is that each tumor is different. It’s a guided missile type of treatment, rather than a big blast approach.” | Biology |
In September 1993, during a beach clean on the Isle of Man, Richard Thompson noticed thousands of multicoloured fragments at his feet, looking like sand. While his colleagues filled sacks with crisp packets, fishing rope, plastic bags and bottles, Thompson became transfixed by the particles.
They were so tiny that they did not fit any category in the spreadsheet where volunteers recorded their findings. “Yet it was pretty clear to me that the most abundant item on the beach was the smallest stuff,” Thompson says.
Over the next 10 years, after completing a PhD and going on to teach marine biology at Newcastle, Southampton and Plymouth universities, Prof Thompson spent his spare time beach-hopping, often enlisting students to help him gather dozens of sand samples in tinfoil trays.
Back in the lab, they would confirm what Thompson had first suspected: the particles were all pieces of plastic, no larger than grains of sand, and ubiquitous along the UK coastline. It was pollution on a whole new scale.
“I started studying marine biology because it was going to be all about turtles, dolphins, and coral reefs,” he says. Instead, those minuscule particles became his main fascination.
In a short study in 2004, co-authored with Prof Andrea Russell at Southampton University, Thompson first described the particles as “microplastics”. He hypothesised that as plastic entered the sea, it slowly fragmented into small but persistent pieces that spread even farther afield. He did not expect much reaction from his modest one-page article.
“It had been a May bank holiday weekend, and we’d been away camping. I came back in and every email that morning was from a journalist, and the phone was ringing continuously.”
The story was picked up instantly by networks in the UK, Europe and Asia. “Shortly after it was published, it was being discussed in the Canadian parliament,” says Russell, whose experiments had confirmed that the particles were plastic.
The discovery helped spawn an entire field of microplastics research, and would be instrumental in plastic bag taxes and bans on plastic microbeads in rinse-off cosmetics in countries including the US, New Zealand and Canada.
Researchers now look at even tinier fragments called nanoplastics that infiltrate our blood, wombs and breastmilk. In some parts of the world, people consume a credit card’s worth of plastic this way each week.
As for Thompson, he would go on to be named the “godfather of microplastics” by a British politician, establish the International Marine Litter Research Unit at Plymouth and become a frequent guest at the House of Commons to discuss the dangers of marine litter.
Most recently, he has been catapulted into the heart of international negotiations to draw up a global treaty to curb plastic pollution, which had its most recent talks in Paris in June.
A UN-led plastics treaty is a “once-in-a-planet opportunity”, Thompson says, but on some of the supposed solutions being put forth, he is very clear. He is adamant that biodegradable plastic cannot save us. Neither can any amount of “cleanups”, like his own fateful expedition in 1993.
What’s worse, he thinks, is that if the plastics treaty sends the world chasing the wrong ideas, microplastics pollution will only worsen. “There’s a real risk that worries me,” he says. “That if we guess at this, we’ll get it wrong.”
A tall figure who speaks with an infectious enthusiasm despite the subject of his work, Thompson often starts conversations by checking his watch – knowing that once he gets started on plastics, time can quickly run away.
He describes his discovery as the result of a solution gone awry. Plastic was invented as a sustainable alternative to ivory, then became indispensable in fields such as engineering and medicine.
But the problem began in the 1950s when the industry’s ambitions turned to single-use packaging, which now accounts for 40% of the more than 400m tonnes of plastic produced each year – at least 8m tonnes of which finds its way to the ocean. Meanwhile, production is only increasing.
The other side to this coin is plastic’s persistence in nature. Thompson’s hypothesis was correct: microplastics result from the lengthy breakdown of larger items, and they will linger for decades more thanks to plastic’s inherent durability, all the while absorbing harmful toxins and pathogens that end up in the bodies of marine animals.
It was known that plastic waste floated in the ocean, but it was not until Thompson gave the tiny versions a name that the world finally recognised the scale of this new pollution.
“I saw [Thompson’s 2004] paper and said: ‘This is really important. Maybe people are going to wake up to the widespread abundance of plastics in the ocean,’” says Edward Carpenter, a retired marine scientist who was the first to describe floating plastic fragments on the surface of the Sargasso Sea in 1972 – particles that are probably still at sea to this day.
Thompson went on to provide the first evidence that sea creatures ingest these particles. He also showed their global distribution, including in the Arctic and in every sample of sand taken from dozens of beaches worldwide.
The plastics treaty would be a shot at stemming this flow, he notes. Much depends on the treaty’s scope, on questions such as whether it should ban some types of plastic, or regulate the array of 13,000 chemicals in everyday packaging.
What concerns Thompson is that policymakers may be led astray by much-hyped approaches that are already being used – for instance, hi-tech initiatives to remove plastic from the sea, such as the Ocean Cleanup.
Thompson stresses that he is an ardent believer in cleanups for coastal pollution, but that it is “naive to expect that [cleanups] can be a systemic solution” to the vast threat of microplastics.
“The psychologists would call it ‘techno-optimism’,” he says. “If we’re not careful, the public gets convinced that a big gadget whizzing around in the middle of the Pacific gyre is going to mop it all up for us, and that’s the end of the story.
“It’s an attractive story – from the point of view of not having to change anything we do.”
He takes a similar view of the proliferation of plastic alternatives such as biodegradable and bio-based plastics. “I was really curious, was this going to be an answer to the problem?” Thompson says.
But, although they are a partial improvement on the fossil-fuel footprint of conventional plastics, and may have some legitimate uses, most biodegradable plastics do not melt away into nature – a fact Thompson first realised when, early in his research career, a trawl hauled a bag up from the bottom of the North Sea. On its side was printed “biodegradable”. “I’ve still got it here somewhere!” he says, gesturing behind him at stacked shelves loaded with folders.
We now know – again, thanks to experiments by Thompson and colleagues – that many biodegradables need controlled industrial conditions to degrade, and can take years to disappear in soil and sea.
“If we keep the nearly 300-400m tonnes of plastic we’re making every year, and all we’re doing is chucking biosource plastics [which are biodegradable] to fill the gap, it doesn’t fix the problem of litter, it doesn’t fix the problem of waste, it doesn’t fix the problem of chemicals,” he says. “It’s just substituting the carbon source.”
None of these kinds of actions change what he thinks is the real danger: the linear relationship we have with plastic – produce, consume, dispose – which created the problem. After two decades describing that problem, he is now focused on the cause. “It’s very much coming back to the land, my research, because the problem isn’t made in the ocean: it’s made by practices on land.”
He said as much at the Paris talks. “Let’s turn to the solutions, which lie upstream,” he told an audience of delegates from 58 countries, explaining that in order to slow the flowing river of plastic, we first need to narrow its source. “We can’t carry on [producing] at the rate we are. It’s overwhelming any ability to cope with it.”
He agrees with new calls among treaty negotiators to curb “unnecessary, avoidable or problematic” plastics, which could include the deluge of single-use items. “I mean, surely we want to buy the product, not the packaging it’s in,” Thompson says. “I think that’s a key place to start: the most important [type of plastic] accumulating in the oceans and escaping waste-management systems is packaging.”
But while items sheathed in Russian-doll-like layers of plastic are obvious candidates for cuts, certain plastics do bring legitimate value to our lives and are likely to remain with us, Thompson says. “I’m not saying we can carry on with business as usual. Reduce has to be the first action,” he stresses – but for the plastic that remains in use, he believes the challenge is to redesign it.
Just 10% of plastic is recycled globally, a staggeringly low figure that is partly due to the thousands of chemicals that give plastic its diverse qualities, colours and forms and make it almost impossible to remix.
“We do a really bad job of designing stuff for circularity. So when people say that it’s clearly failed because we’re only recycling 10%, I think the root cause of the error is at the design stage,” Thompson says.
“When I talk to product designers, they say they were asked to design a product that was attractive – they weren’t asked to consider end-of-life.”
Like some other scientists, he believes chemical additives need to be reduced in the plastic manufacturing process, with the bonus of making them safer to use. He cites PET bottles as a good example of how simpler construction makes it possible to recycle some products up to 10 times.
Redesign can also soften the impact of plastic during its lifecycle. Take the problem of polymer-rich fabrics that shed plastic microfibres into the sea. Several countries now require filters on washing machines to capture these threads.
Yet Thompson and his team have found that half the shedding happens, not during washing, but while people are wearing clothes. Redesigning fabric for longer wear reduces shedding by a striking 80%. “So the systemic answer would work for the planet,” he says. His latest work is examining other design challenges such as car tyres, a primary source of marine microplastics.
Growing scientific consensus on these and other issues could soon be crucial in guiding nations towards solutions, so as a scientist, Thompson is frustrated that there is no UN-level mechanism to communicate the most up-to-date plastics research to governments.
In its absence, he helped establish the Scientists Coalition for an Effective Plastics Treaty, an independent, voluntary group of 200 multidisciplinary researchers from 40 countries who are filling the gap by providing scientific advice to treaty negotiators.
“Scientific evidence has brought us to this point. We’re going to need scientific evidence to go forward in the right way,” he says.
How did he feel in June witnessing nations come together to agree on the need to ban or regulate microplastics – knowing he was at least a part of the reason they were all there? Thompson takes a rare pause in our conversation. “That’s actually making me quite emotional to think about,” he says.
Weeks later, he elaborates. “A whole body of evidence brought us to where we are now with the UN treaty, and even the discussions about microplastics,” he says. “But it was quite a moving moment from a personal perspective, that I felt you could draw a line right back to that paper in 2004.” | Biology |
Scientists may have uncovered the brain's "on switch" for male libido, a new study in mice suggests.
The switch is actually more of a circuit, consisting of a group of neurons that connects multiple regions of the brain. The newly identified circuit not only helps male mice recognize females but also controls their desire to have sex with them, complete the act itself and experience pleasure as a result, the study suggests.
The authors say the findings, published Friday (Aug. 11) in the journal Cell, may improve our understanding of how sexual behaviors are regulated differently in male and female mammals. Someday, the work could lead to the development of new drugs to treat low libido, which is estimated to affect 1 in 5 men at some point in their lives. Although maintaining a high sex drive is not essential for one's health, the sudden, unexpected loss of libido can be distressing to both individuals and their sexual partners.
"If these centers exist in humans — and now we know where to look — it should be possible to design small molecules that can be used to regulate these circuits," lead study author Nirao Shah, a professor of psychiatry and neurobiology at Stanford University, said in a statement.
In the new study, researchers looked at the brains of adult virgin male mice who hadn't seen a female mouse since they were weaned. Previous research by the group had pinpointed a group of neurons that regulate whether male mice can identify the sex of female mice on sight. These neurons connect two regions of the brain: the bed nucleus of the stria terminalis (BNST) in the amygdala, a key brain region for emotional processing, and the preoptic area of the hypothalamus (POA). Having revealed the importance of these cells, the team decided to explore exactly how they communicate.
The team found that certain neurons in the BNST secrete a chemical known as substance P. The small molecule binds to receptors on specific POA neurons, which subsequently activate and shoot messages to regions of the brain that respectively regulate movement and the experience and anticipation of feelings of pleasure associated with sex.
When the researchers stimulated these BNST neurons in male lab mice, the rodents' POA neurons became increasingly active, and the males began having sex with the females after a 10-to-15-minute delay. Directly supplying substance P to the mice's POA caused their sex drive to increase so much that they attempted to mate with inanimate objects.
What's more, by stimulating the rodents' POA neurons, the researchers dramatically cut the break time male mice normally take between rounds of sex. Instead of taking about five days to recover their sex drive after ejaculation, the mice needed only a second or less to be ready to have sex again. Conversely, when the team stopped these POA neurons from working, the mice lost all interest in mating.
Shah's team is still working to identify a similar circuit in female mice, but in the meantime, they're confident that a human equivalent of the male mouse libido circuit will be identified. "It's very likely there are similar sets of neurons in the human hypothalamus that regulate sexual reward, behavior and gratification," he said in the statement.
Shah added that any future drugs that target this circuit would work very differently to the likes of Viagra, which treats erectile dysfunction by increasing blood flow to the penis. Instead, the drugs would directly amplify or reduce the activity of a specific area in the brain that controls male sexual desire, he said. This could help treat either low or high sex drives by ramping up or turning down the circuit's activity.
However, such drugs are a long way from reaching the market.
"Regulating libido is enormously complex in humans with lots of social, political, ethical and other considerations that need to be addressed prior to thinking about any such approach," Shah told HealthDay. | Biology |
Tardigrades — those darling, near-microscopic critters that are nearly indestructible — carry proteins that could keep critical drugs and medical treatments stable without refrigeration, scientists say.
In a study published Monday (March 20) in the journal Scientific Reports, scientists tested this idea with human blood clotting factor VIII, a protein used to treat an inherited bleeding disorder called hemophilia A. Due to a genetic mutation, people with this disorder don't make enough factor VIII and their blood can't clot properly. People with hemophilia A bleed spontaneously, and bleed excessively after injury or surgery.
Treating hemophilia A usually involves injecting factor VIII into the body, to make up for the patient's deficiency. Many factor VIII products require refrigeration, and those that don't can typically be kept at room temperature for only a limited amount of time and within a narrow temperature range.
Tardigrades, on the other hand, have a remarkable ability called anhydrobiosis, where they essentially dry themselves out and enter a state of suspended animation. In this state, the so-called water bears can withstand temperatures as low as minus 328 degrees Fahrenheit (minus 200 degrees Celsius) and as high as 300 F (148.9 C).
The study authors wanted to see if the tardigrade's remarkable resilience could be carried over to medical treatments.
"Our work provides a proof of principle that we can stabilize Factor VIII, and likely many other pharmaceuticals, in a stable, dry state at room or even elevated temperatures using proteins from tardigrades," senior study author Thomas Boothby, an assistant professor of molecular biology at the University of Wyoming, said in a statement. "And, thus, provide critical life-saving medicine to everyone everywhere."
The team drew two substances from the tardigrade Hypsibius exemplaris: a sugar called trehalose and a protein called cytoplasmic abundant heat soluble (CAHS) D. Both substances help preserve the tardigrades' bodies during anhydrobiosis so that they survive to be "rehydrated" later on.
The team tweaked both substances' biophysical properties to boost their ability to stabilize factor VIII. They then used the substances to store factor VIII without refrigeration and under unfavorable conditions, such as repeated dehydration and rehydration, extreme heat and long-term dry storage. Both compounds worked, but CAHS D worked better than trehalose, the team noted.
The authors think this approach could potentially be used for other medicines that currently require refrigeration. But we're still in the early days of this research.
"This will not only be beneficial for global health initiatives in remote or developing parts of the world, but also for fostering a safe and productive space economy which will be reliant on new technologies that break our dependence on the cold-chain for the storage of medicine, food, and other biomolecules," the authors wrote. | Biology |
Octopuses shown to map their visual landscape much like humans do
An octopus devotes about 70% of its brain to vision. But until recently, scientists have only had a murky understanding of how these marine animals see their underwater world. A new University of Oregon study brings the octopus's view into focus.
For the first time, neuroscientists have recorded neural activity from the visual system of an octopus. They've created a map of the octopus's visual field by directly observing neural activity in the animal's brain in response to light and dark spots in different locations.
This map of the neural activity in the octopus visual system looks a lot like what's seen in the human brain—even though octopuses and humans last shared a common ancestor some 500 million years ago, and octopuses have evolved their complex nervous systems independently.
UO neuroscientist Cristopher Niell and his team report their findings in a paper published June 20 in Current Biology.
"Nobody has actually recorded from the central visual system of a cephalopod before," Niell said. Octopuses and other cephalopods aren't typically used as a model for understanding vision, but Niell's team is fascinated by their unusual brains. In a related paper published last year in Current Biology, the lab identified different categories of neurons in the octopus optic lobe, the part of the brain dedicated to vision. Together, "these papers provide a nice foundation by elucidating the different types of neurons and what they respond to—two essential aspects we'd want to know to start understanding a novel visual system," Niell said.
In the new study, researchers measured how neurons in the octopus visual system responded to dark and light spots moving across a screen. Using fluorescent microscopy, the researchers could watch the activity of neurons as they responded, to see how neurons reacted differently depending on where the spots appeared.
"We were able to see that each location in the optic lobe responded to one location on the screen in front of the animal," Niell said. "If we moved a spot over, the response moved over in the brain."
This kind of one-to-one map is found in the human brain for multiple senses, like vision and touch. Neuroscientists have connected the location of particular sensations to specific spots in the brain. A well-known representation for touch is the homunculus, a cartoon human figure in which body parts are drawn in proportion with how much brain space is devoted to processing sensory input there. Highly sensitive spots like fingers and toes appear massive because there are lots of brain inputs from these body parts, while less sensitive areas are much smaller.
But finding an orderly linkage between the visual scene and the octopus brain was far from a given. It's a fairly complex evolutionary innovation, and some animals such as reptiles don't have that kind of map. Also, previous studies had suggested that octopuses do not have a homunculus-like map for different parts of their body.
"We hoped that the visual map might be there, but nobody had directly observed it before," Niell said.
The octopus neurons also responded particularly strongly to small light spots and big dark spots, the researchers noted—a distinct difference from the human visual system. Niell's team hypothesizes that this might be due to the specific characteristics of the underwater environment that octopuses must navigate. Looming predators might appear as large dark shadows, while close-up objects like food would appear as small bright spots.
Next, the researchers hope to understand how the octopus brain responds to more complex images, such as those actually encountered in their natural environment. Their eventual goal is to trace the path of these visual inputs deeper into the octopus brain, to understand how the octopus sees and interacts with its world.
More information: Judit R. Pungor et al, Functional organization of visual responses in the octopus optic lobe, Current Biology (2023). DOI: 10.1016/j.cub.2023.05.069
Provided by University of Oregon | Biology |
Through associative learning, a pigeon's peck can in some ways mirror high tech.
A University of Iowa study discovered that pigeons and artificial intelligence share a similar learning process called associative learning. This method, which involves making connections between objects or patterns, allows both pigeons and AI to excel at certain tasks, challenging the notion that it is a rigid and unsophisticated form of learning.
Can a pigeon match wits with artificial intelligence? At a very basic level, yes.
In a new study, psychologists at the University of Iowa examined the workings of the pigeon brain and how the “brute force” of the bird’s learning shares similarities with artificial intelligence.
The researchers gave the pigeons complex categorization tests that high-level thinking, such as using logic or reasoning, would not aid in solving. Instead, the pigeons, by virtue of exhaustive trial and error, eventually were able to memorize enough scenarios in the test to reach nearly 70% accuracy.
The researchers equate the pigeons’ repetitive, trial-and-error approach to artificial intelligence. Computers employ the same basic methodology, the researchers contend, being “taught” how to identify patterns and objects easily recognized by humans. Granted, computers, because of their enormous memory and storage power—and growing ever more powerful in those domains—far surpass anything the pigeon brain can conjure.
Still, the basic process of making associations—considered a lower-level thinking technique—is the same between the test-taking pigeons and the latest AI advances.
“You hear all the time about the wonders of AI, all the amazing things that it can do,” says Ed Wasserman, Stuit Professor of Experimental Psychology in the Department of Psychological and Brain Sciences at Iowa and the study’s corresponding author. “It can beat the pants off people playing chess, or at any video game, for that matter. It can beat us at all kinds of things. How does it do it? Is it smart? No, it’s using the same system or an equivalent system to what the pigeon is using here.”
The researchers sought to tease out two types of learning: one, declarative learning, is predicated on exercising reason based on a set of rules or strategies—a so-called higher level of learning attributed mostly to people. The other, associative learning, centers on recognizing and making connections between objects or patterns, such as, say, “sky-blue” and “water-wet.”
Numerous animal species use associative learning, but only a select few—dolphins and chimpanzees among them—are thought to be capable of declarative learning.
Yet AI is all the rage, with computers, robots, surveillance systems, and so many other technologies seemingly “thinking” like humans. But is that really the case, or is AI simply a product of cunning human inputs? Or, as the study’s authors put it, have we shortchanged the power of associative learning in human and animal cognition?
Wasserman’s team devised a “diabolically difficult” test, as he calls it, to find out.
Each test pigeon was shown a stimulus and had to decide, by pecking a button on the right or on the left, to which category that stimulus belonged. The categories included line width, line angle, concentric rings, and sectioned rings. A correct answer yielded a tasty pellet; an incorrect response yielded nothing. What made the test so demanding, Wasserman says, is its arbitrariness: No rules or logic would help decipher the task.
“These stimuli are special. They don’t look like one another, and they’re never repeated,” says Wasserman, who has studied pigeon intelligence for five decades. “You have to memorize the individual stimuli or regions from where the stimuli occur in order to do the task.”
Each of the four test pigeons began by correctly answering about half the time. But over hundreds of tests, the quartet eventually upped their score to an average of 68% right.
“The pigeons are like AI masters,” Wasserman says. “They’re using a biological algorithm, the one that nature has given them, whereas the computer is using an artificial algorithm that humans gave them.”
The common denominator is that AI and pigeons both employ associative learning, and yet that base-level thinking is what allowed the pigeons to ultimately score successfully. If people were to take the same test, Wasserman says, they’d score poorly and would probably give up.
“The goal was to see to what extent a simple associative mechanism was capable of solving a task that would trouble us because people rely so heavily on rules or strategies,” Wasserman adds. “In this case, those rules would get in the way of learning. The pigeon never goes through that process. It doesn’t have that high-level thinking process. But it doesn’t get in the way of their learning. In fact, in some ways it facilitates it.”
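To make the comparison between the pigeons and machine learning concrete, here is a minimal sketch, in Python, of reward-driven associative learning on a toy binary categorization task. The stimuli, the feature encoding, and the nearest-neighbor response rule are illustrative assumptions, not the researchers’ actual task or model.

```python
# A toy illustration of associative learning by memorization (not the study's model).
import random

random.seed(0)

def make_stimulus():
    # Each stimulus is a point in an arbitrary 2-D feature space; its category
    # follows a patchwork of regions with no simple rule to verbalize.
    x, y = random.random(), random.random()
    category = (int(x * 4) + int(y * 4)) % 2
    return (x, y), category

memory = []  # remembered (features, category) associations, built by trial and error

def peck(features):
    # Respond with the category of the most similar remembered stimulus,
    # or guess at random when nothing has been memorized yet.
    if not memory:
        return random.randint(0, 1)
    nearest = min(memory, key=lambda m: (m[0][0] - features[0]) ** 2 + (m[0][1] - features[1]) ** 2)
    return nearest[1]

trials, correct = 4000, 0
for _ in range(trials):
    features, true_category = make_stimulus()
    if peck(features) == true_category:
        correct += 1  # the "food pellet"
    # With only two buttons, the outcome of each trial reveals the right answer,
    # so the stimulus-category association is stored for later trials.
    memory.append((features, true_category))

print(f"accuracy over {trials} trials: {correct / trials:.0%}")
```

Like the birds, this learner begins at chance and improves only by piling up stimulus-reward associations; there is no rule or strategy for it to discover.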
Wasserman sees a paradox in how associative learning is viewed.
“People are wowed by AI doing amazing things using a learning algorithm much like the pigeon,” he says, “yet when people talk about associative learning in humans and animals, it is discounted as rigid and unsophisticated.”
The study, “Resolving the associative learning paradox by category learning in pigeons,” was published online on February 7 in the journal Current Biology.
Reference: “Resolving the associative learning paradox by category learning in pigeons” by Edward A. Wasserman, Andrew G. Kain and Ellen M. O’Donoghue, 7 February 2023, Current Biology.
DOI: 10.1016/j.cub.2023.01.024
Study co-authors include Drew Kain, who graduated with a neuroscience degree from Iowa in 2022 and is pursuing a doctorate in neuroscience at Iowa; and Ellen O’Donoghue, who earned a doctorate in psychology at Iowa last year and is now a postdoctoral scholar at Cardiff University.
The National Institutes of Health funded the research. | Biology |
Ultrafast lasers on ultra-tiny chips
Lasers have become relatively commonplace in everyday life, but they have many uses outside of providing light shows at raves and scanning barcodes on groceries. Lasers are also of great importance in telecommunications and computing as well as biology, chemistry, and physics research.
In those latter applications, lasers that can emit extremely short pulses—those on the order of one-trillionth of a second (one picosecond) or shorter—are especially useful. Using lasers operating on such small timescales, researchers can study physical and chemical phenomena that occur extremely quickly—for example, the making or breaking of molecular bonds in a chemical reaction or the movement of electrons within materials.
These ultrashort pulses are also extensively used for imaging applications because they can have extremely large peak intensities but low average power, so they avoid heating or even burning up samples such as biological tissues.
In a paper appearing in the journal Science, Caltech's Alireza Marandi, an assistant professor of electrical engineering and applied physics, describes a new method developed by his lab for making this kind of laser, known as a mode-locked laser, on a photonic chip. The lasers are made using nanoscale components (a nanometer is one-billionth of a meter), allowing them to be integrated into light-based circuits similar to the electricity-based integrated circuits found in modern electronics.
"We're not just interested in making mode-locked lasers more compact," Marandi says. "We are excited about making a well-performing mode-locked laser on a nanophotonic chip and combining it with other components. That's when we can build a complete ultrafast photonic system in an integrated circuit. This will bring the wealth of ultrafast science and technology, currently belonging to meter-scale experiments, to millimeter-scale chips."
Ultrafast lasers of this sort are so important to research that this year's Nobel Prize in Physics was awarded to a trio of scientists for the development of lasers that produce attosecond pulses (one attosecond is one-quintillionth of a second). Such lasers, however, are currently extremely expensive and bulky, says Marandi—who notes that his research is exploring methods to achieve such timescales on chips that can be orders of magnitude cheaper and smaller, with the aim of developing affordable and deployable ultrafast photonic technologies.
"These attosecond experiments are done almost exclusively with ultrafast mode-locked lasers," he says. "And some of them can cost as much as $10 million, with a good chunk of that cost being the mode-locked laser. We are really excited to think about how we can replicate those experiments and functionalities in nanophotonics."
At the heart of the nanophotonic mode-locked laser developed by Marandi's lab is lithium niobate, a synthetic salt with unique optical and electrical properties that—in this case—allow the laser pulses to be controlled and shaped through the application of an external radio-frequency electrical signal. This approach is known as active mode-locking with intracavity phase modulation.
"About 50 years ago, researchers used intracavity phase modulation in tabletop experiments to make mode-locked lasers and decided that it was not a great fit compared to other techniques," says Qiushi Guo, the first author of the paper and a former postdoctoral scholar in Marandi's lab. "But we found it to be a great fit for our integrated platform."
"Beyond its compact size, our laser also exhibits a range of intriguing properties. For example, we can precisely tune the repetition frequency of the output pulses in a wide range. We can leverage this to develop chip-scale stabilized frequency comb sources, which are vital for frequency metrology and precision sensing," adds Guo, who is now an assistant professor at the City University of New York Advanced Science Research Center.
Marandi says he aims to continue improving this technology so it can operate at even shorter timescales and higher peak powers, with a goal of 50 femtoseconds (a femtosecond is one-quadrillionth of a second), which would be a 100-fold improvement over his current device, which generates pulses 4.8 picoseconds in length.
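As a quick sanity check on the "100-fold" figure, the ratio of the two pulse lengths quoted above can be computed directly; the values come from the article, and only the arithmetic is added here.

```python
# Ratio of the current pulse length to the targeted one.
current_pulse_s = 4.8e-12  # 4.8 picoseconds
target_pulse_s = 50e-15    # 50 femtoseconds
print(current_pulse_s / target_pulse_s)  # 96.0, i.e. roughly a 100-fold shortening
```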
Paper co-authors are Benjamin K. Gutierrez, graduate student in applied physics; electrical engineering graduate students Ryoto Sekine, Robert M. Gray, James A. Williams, Selina Zhou, and Mingchen Liu; Luis Ledezma, an external affiliate in electrical engineering; Luis Costa, formerly at Caltech and now with JPL, which Caltech manages for NASA; and Arkadev Roy, formerly of Caltech and now with UC Berkeley.
More information: Qiushi Guo et al, Ultrafast mode-locked laser in nanophotonic lithium niobate, Science (2023). DOI: 10.1126/science.adj5438 | Biology |
Introduction
A tree has something in common with the weeds and mushrooms growing around its roots, the squirrels scurrying up its trunk, the birds perched on its branches, and the photographer taking pictures of the scene. They all have genomes and cellular machinery neatly packed into membrane-bound compartments, an organizational system that places them in an immensely successful group of life forms called eukaryotes.
The early history of eukaryotes has long fascinated scientists who yearn to understand when modern life started and how it evolved. But tracing the earliest eukaryotes back through Earth’s history has been difficult. Limited fossil data shows that their first ancestor appeared at least 1.6 billion years ago. Yet other telltale proofs of their existence are missing. Eukaryotes should produce and leave behind certain distinctive molecules, but fossilized versions of those molecules don’t show up in the rock record until 800 million years ago. This unexplained 800-million-year gap in early eukaryotic history, a crucial period when the last common ancestor of all of today’s complex life first arose, has shrouded the story of early life in mystery.
“There’s this massive temporal gap between the fossil record of what we think are the earliest eukaryotes and the first … biomarker evidence of eukaryotes,” said Galen Halverson, a professor at McGill University in Montreal.
There are many possible explanations for that paradoxical gap. Maybe eukaryotes were too scarce during that time to leave behind molecular fossil evidence. Or perhaps they were abundant, but their molecular fossils did not survive the harsh conditions of geologic time.
A recent study published in Nature offers an alternative explanation: Scientists may have been searching for the wrong fossilized molecules this entire time. When the study authors looked for more primitive versions of the chemicals others had been searching for, they discovered them in abundance — revealing what they described as “a lost world” of eukaryotes that lived 800 million to at least 1.6 billion years ago.
“These molecules have been there all along,” said Jochen Brocks, a geochemist with the Australian National University in Canberra who co-led the study with his then-graduate student Benjamin Nettersheim. “We couldn’t find [them] because we didn’t know what they looked like.”
The findings bring new clarity to the dynamics of early eukaryotic life. The abundance of these molecular fossils suggests that the primitive organisms thrived in the oceans for hundreds of millions of years before the ancestors of modern eukaryotes took over, seeding life forms that would one day evolve into the animals, plants, fungi and protists that we see today.
“It’s an elegant hypothesis that seems to reconcile these very disparate records,” said Halverson, who was not involved in the study. “It makes everything make sense.”
The findings were welcome news for paleontologists like Phoebe Cohen, chair of geosciences at Williams College in Massachusetts who long thought something was missing in the biomarker record. “There is a rich and dynamic history of life before animals evolved that is harder to understand because we can’t see it,” Cohen said. “But it’s extremely important because it basically sets the stage for the world that we have today.”
The Protosteroid Puzzle
When the fossil record is underwhelming, scientists have other ways to estimate when different species branched off from one another in the evolutionary tree. Primary among those tools are molecular clocks: stretches of DNA that mutate at a constant rate, allowing scientists to estimate the passage of time. According to molecular clocks, the last common ancestor of modern eukaryotes, which belonged to a diverse collection of organisms known as the crown group, first emerged at least 1.2 billion years ago.
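To illustrate the molecular-clock logic in the simplest terms, a divergence time follows from dividing the genetic distance between two lineages by twice the substitution rate. The sketch below uses invented numbers purely for illustration; real analyses calibrate rates against dated fossils and rely on far more sophisticated models.

```python
# Hypothetical molecular-clock estimate (illustrative values only).
substitutions_per_site = 0.03       # assumed genetic distance between two lineages
rate_per_site_per_year = 1.25e-11   # assumed constant substitution rate

# After a split, both lineages accumulate changes independently,
# so the observed distance reflects 2 x rate x time since divergence.
divergence_time_years = substitutions_per_site / (2 * rate_per_site_per_year)
print(f"estimated divergence: {divergence_time_years / 1e9:.1f} billion years ago")
```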
But the eukaryotic story doesn’t start there. Other early eukaryotes, known as the stem group, lived for hundreds of millions of years before our first common ancestor evolved. Researchers know little about them beyond the fact that they existed. The small handful of ancient eukaryote fossils that have been discovered are too ambiguous to be identified as stem or crown.
In the absence of compelling body fossils, researchers look for molecular fossils. Molecular fossils, which preserve separately from body fossils, can be challenging for scientists to pin down. First they have to identify which molecules could have been produced only by the organisms they want to study. Then they have to deal with the fact that not all of those molecules fossilize well.
Organic material decays at different rates, and some parts of eukaryotes preserve in rock better than others. Tissues dissolve first. DNA might stick around for longer, but not too long: The oldest DNA ever found is around 2 million years old. Fat molecules, however, can potentially survive for billions of years.
Eukaryotes create vast quantities of fat molecules known as sterols, a type of steroid that’s a critical component of cell membranes. Since the presence of a cell membrane is indicative of eukaryotes, and fat molecules tend to persist in rock, sterols have become the go-to molecular fossil for the group.
Modern eukaryotes run on three major sterol families: cholesterol in animals, phytosterols in plants and ergosterol in fungi and some protists. Their synthesis starts with a linear molecule, which the cell molds into four rings so that the resulting shape fits perfectly into a membrane, Brocks said. That process has many stages: It takes another eight enzymatic steps for animal cells to make cholesterol, while plant cells require another 11 enzymatic steps to make a phytosterol.
On its way to building its advanced sterol, a cell creates a series of simpler molecules at each step in the process. When plugged into an artificial membrane, even those intermediate sterols provide the permeability and rigidity a cell needs to function as it ought. The biochemist Konrad Bloch, who was awarded the Nobel Prize in 1964 in part for discovering the cellular steps to make cholesterol, “was puzzled by that,” Brocks said. Why would a cell put in extra effort to make a more complicated sterol when a simpler molecule will do the job?
In 1994, Bloch wrote a book in which he predicted that each of these intermediate sterols had once been the end product used in the membrane of an ancestral eukaryotic cell. Each additional step may have required more of the cell’s energy, but the resulting molecule was a slight improvement over the previous one — enough of an upgrade to outcompete the precursor and take hold in evolutionary history.
If that were true, it would explain why no one had been able to find molecular fossils of sterols before the rapid expansion of modern eukaryotes some 800 million years ago. Researchers had been searching for cholesterols and other modern structures in the rock record. They didn’t realize that ancient biochemical pathways were shorter and that stem-group organisms didn’t make modern sterols: They made protosterols.
Molecular Coffee Grind
In 2005, about five years after Bloch died, Brocks and colleagues reported in Nature the first hints that such intermediary molecules once existed. In ancient sediments they had found unusually structured steroids they didn’t recognize. But at the time, Brocks didn’t consider that a eukaryote could have created them. “Back then, I was pretty convinced that they were bacterial,” he said. “No one was thinking about the possibility of stem-group eukaryotes at all.”
He continued sampling ancient rocks and looking for these curious molecules. About a decade into the work, he and Nettersheim realized that many of the molecular structures in rock samples looked “primitive” and not like the ones bacteria typically make, Brocks said. Could they be Bloch’s intermediate sterols?
They needed more proof. In the decade that followed, Brocks and Nettersheim contacted petroleum and mining companies to request samples of any ancient sediments they had accidentally discovered during drilling expeditions.
“Most people would have found two examples and published,” said Andrew Knoll, a professor of natural history at Harvard University who was not involved in the study. (He was Brocks’ postdoctoral adviser years ago.) “Jochen spent the better part of the decade looking at rocks throughout the Proterozoic from all over the world.”
Meanwhile, the researchers created a search template to identify molecules in the sediment. They converted modern-day intermediate molecules made during sterol synthesis into plausible geological steroid equivalents. (Cholesterol, for example, fossilizes as cholestane.) “If you do not know what the molecule looks like, you will not see it,” Brocks said.
In the lab, they extracted fossil molecules from the sediment samples using a process that is “a bit like making coffee,” Nettersheim said. After crushing rocks, they added organic solvents to extract the molecules within — just as hot water is used to extract coffee from roasted and ground beans.
To analyze their samples and compare them against their references, they used mass spectrometry, which determines the molecules’ weights, and chromatography, which reveals their atomic makeup.
The process is arduous. “You analyze hundreds of rocks and find nothing,” Brocks said. When you do find something, it’s often contamination from recent times. But the more samples they analyzed, the more fossils they found.
Some samples were filled to the brim with protosteroids. They found the molecules in rocks dating from 800 million to 1.6 billion years ago. It seemed that not only were ancient eukaryotes present for some 800 million years before modern eukaryotes took off, but they were abundant.
The researchers could even recognize the eukaryotes’ evolutionary process as their steroids became more complex. In 1.3-billion-year-old rocks, for example, they found an intermediate molecule that was more advanced than the 1.6-billion-year-old protosteroids, but not as advanced as modern steroids.
“That was a very clever way to deal with the missing record of molecular fossils,” said David Gold, a geobiologist at the University of California, Davis who was not involved in the study. Their discovery immediately filled an 800-million-year gap in the story of how modern life came to be.
A Lost World
The molecular findings, put together with genetic and fossil data, reveal the clearest picture yet of early eukaryotic dynamics from around 1 billion years ago during the mysterious mid-Proterozoic era, experts said. Based on Brocks and Nettersheim’s evidence, stem- and crown-group eukaryotes likely lived together for hundreds of millions of years and probably competed with each other during a period that geologists call the Boring Billion for its slow biological evolution.
The absence of the more modern steroids during this time suggests that the crown group didn’t immediately take hold. Rather, the membrane-bound organisms started small as they found niches in the ancient ecosystem, Gold said. “It takes a long time for [eukaryotes] to become ecologically dominant,” he said.
At first, the stem group may have had an advantage. Oxygen levels in the atmosphere were significantly lower than they are today. Because building protosterols requires less oxygen and energy than modern sterols require, stem-group eukaryotes were likely more successful and abundant.
Their influence declined when the world hit a critical transition known as the Tonian Period. Between 1 billion and 720 million years ago, oxygen, nutrients and other cellular raw materials increased in the oceans. Fossils of modern eukaryotes, like algae and fungi, start to appear in the rock record, and modern steroids start to outnumber protosteroids in fossilized biomarkers — evidence that suggests crown-group eukaryotes had begun to thrive, increase in number and diversify.
Why would sterols become more complicated over time? The authors suggested that the more complex sterols bestowed some evolutionary advantage on their owners — perhaps related to dynamics in the creatures’ cell membranes. Whatever the reason, the sterol shift was evolutionarily significant. The makeup of modern sterols likely gave crown-group eukaryotes a boost over the stem group. Eventually, “this lost world of ancient eukaryotes was replaced by the modern eukaryotes,” Brocks said.
A Bacterial Wrinkle
The researchers’ evolutionary sterol story is compelling, but it’s not rock solid.
“I wouldn’t be surprised” if their interpretation is correct, Gold said. However, there is another possibility. Although scientists tend to associate sterols with eukaryotes, some bacteria can also make them. Could the molecular fossils in the study have been left by bacteria instead?
Gordon Love, a geochemist at the University of California, Riverside, thinks the bacterial scenario makes more sense. “These protosteroids turn up in rocks of all ages,” he said. “They don’t just disappear, which means that something other than stem eukaryotes is capable of making those.” He argued that bacteria, which dominated the sea at the time, could have easily produced protosteroids.
The authors can’t rule out that possibility. In fact, they suspect that some of their fossil molecules were made by bacteria. But the possibility that their vast collection of fossilized protosteroids, stretching for hundreds of millions of years, was made entirely by bacteria seems unlikely, Brocks said.
“If you look at the ecology of these bacteria today, and their abundance, there is just no reason to believe that they could become so abundant that they could have produced all these molecules,” he said. In the modern world, bacteria produce protosterols only in niche environments such as hydrothermal springs or methane seeps.
Cohen, the Williams College paleontologist, agrees with Brocks. The interpretation that these molecules were made by eukaryotes “is consistent with every other line of evidence,” she said — from the fossil record to molecular clock analyses. “I’m not as worried” about that possibility, she said.
Either interpretation presents more questions than answers. “Both stories would be absolutely crazy weird,” Brocks said. They are “different views of our world,” he added, and it would be nice to know which one is true.
Lacking a time machine, the researchers are searching for more evidence to improve their certainty one way or the other. But there are only so many ways to reconstruct or perceive ancient life — and even scientists’ best guesses can never completely fill the gap. “Most life didn’t leave any traces on Earth,” Nettersheim said. “The record that we see is limited. … For most of Earth’s history, life might have looked very different.” | Biology |
Videos appear to show shimmering chemical contamination on creeks near the site of the East Palestine, Ohio, train derailment and chemical leak.
Experts tell USA TODAY the rainbow-colored material is likely vinyl chloride, a heavier-than-water chemical that both leaked and burned following the Feb. 3 derailment of a Norfolk Southern freight train. The videos mark yet another example of heightened health and environmental concerns in the wake of the disaster.
Authorities say about 3,500 small fish were killed in the creeks surrounding the derailment site shortly after the crash, leak and burn, but they have not reported significant subsequent deaths. Meanwhile, a new federal lawsuit claims fish and wild animals are dying as far as 20 miles away from the site of the derailment.
Here's what to know about the videos:
What do the videos show?
The videos, posted by several people including Ohio Republican Sen. J.D. Vance, show rainbow-colored slicks spreading across the surface of small streams in the area after people poked the creek beds with sticks or threw rocks in.
"This is disgusting," Vance declared as sheen spread across what he said was Leslie Run creek.
What is going on in the videos?
John Senko, a professor of geosciences and biology at the University of Akron, said the videos depict what appears to be vinyl chloride, which would sink to the bottom of a lake or stream because it's denser than water.
"It looks like what's happening is you got some of that stuff on the bottom of the creek, you stir it up a little bit, it starts to come up and then it's just going to sink again," he said. "So that stuff's behaving like I would expect vinyl chloride to behave.”
What are the health risks of the creek contamination?
The videos are evidence that groundwater contamination has occurred, experts told the USA TODAY Network. But contamination does not necessarily mean there's a health risk.
The U.S. Environmental Protection Agency sets limits for what's deemed acceptable exposure to many chemicals, and says short-term exposure to high levels of vinyl chloride in the air can make people dizzy or give them headaches, while long-term exposure can cause liver damage.
Dr. Kari Nadeau, the chair of Harvard's Environmental Health Department, said the oily sheen was likely left by burned chemicals that drifted back down to the ground and into the water.
"The information that I know as a public health expert, as well as from what the EPA is telling us right now, the EPA is letting us know that there are not dangerous levels of toxins in the water or the air at the current time," she said.
What health concerns are there after Ohio train derailment?
Ohio Gov. Mike DeWine has asked CDC doctors and experts to help screen area residents for illness, and state and federal environmental experts are overseeing monitoring and cleanup efforts.
Ground water contamination: The crash and subsequent fire released chemicals into the air and onto the ground and a stream nearby. Experts say the ground and water contamination likely pose the biggest risk now.
Air quality: Federal authorities have tested more than 450 homes for volatile organic compounds, which could pose a health risk.
Private wells: Ohio Department of Health Director Bruce Vanderhoff said Tuesday that the air and water quality around East Palestine is generally safe, but private wells are in the process of being tested. Until those results are in, Vanderhoff encouraged residents with a private water supply to drink and use bottled water.
What's being done to clean up?
The spill happened closest to Sulfur Run creek, and authorities have dammed it above and below the spill area. They're currently pumping clean creek water around the contaminated section, then remediating any contaminated water that flows into the short stretch of dry creek bed.
Norfolk Southern has said it will install wells to monitor groundwater. Officials will also sample soil in key areas, including near where the cars filled with vinyl chloride burned.
EPA controversy explained
Many conservative lawmakers have complained the EPA has not responded aggressively enough to the spill. The EPA says Ohio and other federal agencies are better suited to assist.
Vance in particular has attacked the EPA and challenged officials to drink the water in the streams in East Palestine.
Underlying the discussion: The EPA has 20% fewer employees today than it did at its peak in 1999, when about 18,100 people worked there.
The EPA's annual budget hit a high of $10.3 billion in 2010, and today sits at $9.5 billion. If the budget had kept up with inflation, it would be $14 billion. In 2017, then-President Trump proposed a 31% cut to the EPA's annual budget, although Congress ultimately rejected most of his cuts.
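The inflation comparison can be reproduced roughly as follows; the cumulative CPI factor of about 1.36 for 2010 onward is an assumed approximation, not a figure from the article.

```python
# Rough inflation adjustment of the 2010 EPA budget (CPI factor is an assumption).
budget_2010_usd = 10.3e9
assumed_cpi_factor = 1.36  # approximate cumulative US CPI inflation since 2010
print(f"${budget_2010_usd * assumed_cpi_factor / 1e9:.1f} billion")  # about $14 billion
```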
President Biden has proposed a 2023 EPA budget of $11.8 billion, including hiring an extra 1,900 workers.
The 2021 Bipartisan Infrastructure Law also provided billions in additional funding for programs overseen by the EPA, including environmental justice and cleanups. Most of the EPA's funding actually gets passed through to states and local governments, according to the agency.
Ohio is among 24 states suing the federal government over the EPA's plans to toughen environmental regulations and pollution limits in small streams and wetlands over a long-disputed "Waters of the United States" rule. That lawsuit was filed Thursday.
Contributing: Kelly Byer, The Repository
This article originally appeared on USA TODAY NETWORK: East Palestine, Ohio, train derailment: Videos show 'disgusting' water | Biology |
Why it's worth protecting a spectacular fossil site NZ almost lost to commercial mining interests
One of New Zealand's most exceptional fossil sites may soon be open to scientists again following a land purchase that saved it from commercial mining.
Foulden Maar is a small, deep lake that formed 23 million years ago in Otago, at the start of the Miocene epoch when New Zealand's climate was much warmer and wetter. A rainforest thrived around the lake's fringes, and algae known as diatoms bloomed each summer.
As the algal blooms died off and sank to the lake bed, they formed sedimentary layers of diatomite and preserved the most exquisite and delicate fossils of flowers, insects and fish as well as a climate record covering 100,000 years.
But the diatomite was also of interest to mining company Plaman Resources—until, following long negotiations, the Dunedin City Council bought most of the land earlier this year.
Fossil sites are relatively common, but examples representing entire ecosystems are extremely rare. Foulden Maar is one of only two such sites in New Zealand that preserve ecological interactions and features such as eyes, skin, stomach contents and original color patterns.
Such sites yield remarkable information about the history of life, which is impossible to obtain from other sources.
The other site is the nearby Hindon Maar complex, which is 15 million years old. Both sites preserve ecosystems of small crater lakes and the animals and plants of surrounding rainforests. Recent discoveries at these sites are transforming our understanding of New Zealand's past biodiversity and climate.
Lake ecosystems: freshwater galaxiids and eels
All fossils of the iconic southern-hemisphere family Galaxiidae derive from Otago Miocene lake deposits, with the entire life cycle from larvae (whitebait) to juveniles and fully grown adults present at Foulden Maar.
Remarkable preservation of numerous articulated skeletons includes eyes, gaping mouths and skin, the last including the star-like patterning that gave galaxiids their name.
Gut contents and an abundance of fossilized poo (coprolites) provide evidence of a changing diet. Larvae dined on diatoms while adults were lake-margin ambush predators feeding on terrestrial and aquatic insects. Fish debris in other less common coprolites show the galaxiids were themselves prey.
Slender, elongated articulated fish skeletons with rows of curved conical teeth provide the only southern-hemisphere records of the freshwater eel, Anguilla. This was likely the top predator in these lakes.
Trapped in these small, closed lakes, the eels would have been unable to return to the sea to breed and were effectively "living dead," unlike the galaxiids which could reproduce in the maars.
Forest ecosystems: insects, spiders, leaves, flowers
These ancient maar lakes also contain a treasure trove of spiders and insects. When our research program began in 2003, only six fossil insects more than 2 million years old were known from New Zealand. We now have more than 600, almost all different.
Foulden Maar has yielded 270 insects from 17 genera in 15 families and nine orders.
Spiders are commonly the top terrestrial invertebrate predators in modern New Zealand forests, but rarely fossilize because they lack hard parts. But in Foulden Maar, we found several specimens, including a juvenile trapdoor spider.
Early studies at Hindon Maar have already added 240 more insects in five orders and 20 families.
Insects are often completely preserved with details of antennae, fragile wings and compound eyes visible.
Fossils from the Foulden and Hindon maars include ancient lineages of termites, armored scale insects in life position along leaf veins, bark bugs and a lace bug that probably lived on Astelia (kakaha, bush lily) as its close living relative does today.
Others include leaf beetles with structural color, weevils, rove beetles, numerous ants and wasps, caddis flies with larvae still in their cases, crane flies with well preserved compound eyes and a hairy cicada whose closest relative today is found in Tasmania.
These taxa are only the tip of the taxonomic iceberg. Hundreds more terrestrial arthropods are also being revealed in our research on inclusions in New Zealand amber—90 specimens in one block of layered amber alone.
Rainforest leaves and flowers
Myriad leaves with excellent preservation show that both maars were surrounded by subtropical to warm-temperate rainforests, dominated by members of the laurel and cinnamon plant families at Foulden Maar and a southern beech forest at Hindon.
To date we have recorded at least 100 species from 35 plant families between the sites, including many taxa now extinct locally, but with relatives still living in New Caledonia, Australia and South America.
Of particular importance are diverse fossil flowers with reproductive structures such as petals, stamens and anthers with pollen still present, as well as abundant fossilized fruits and seeds.
These reproductive structures are treasures of a different kind—fragile, seasonal and fleeting. But they provide critical information about the ecology of the parent plants and their possible pollination and dispersal mechanisms.
Close comparisons to the biology of living plants also suggest the fossil species reproduced in a similar manner to their living relatives. This implies that reproductive mechanisms were conserved for 23 million years in the New Zealand flora.
Currently, the Dunedin City Council is exploring management options for the site, which will once again allow public and scientific access to this remarkable fossil-rich, ancient lake deposit well into the future.
Provided by The Conversation | Biology |
Researchers generate cattle blastoids in lab to aid farm animal reproduction
UT Southwestern Medical Center stem cell and developmental biologists and colleagues have developed a method to produce bovine blastoids, a crucial step in replicating embryo formation in the lab that could lead to the development of new reproductive technologies for cattle breeding.
Current efforts are hampered by a limited supply of embryos, so understanding the mechanisms for creating successful bovine blastocyst-like structures in the lab could prove valuable for improving reproduction in cattle. This technology could potentially lead to faster genetic gains in beef or dairy production or to reducing disease incidence in the animals.
"This study is the first demonstration of generating blastocyst-like structures (blastoids) from a livestock species," said Jun Wu, Ph.D., Assistant Professor of Molecular Biology and the Virginia Murchison Linthicum Scholar in Medical Research. "With further optimization, the advancements in bovine blastoid technology could pave the way for innovative artificial reproductive methodologies in cattle breeding. This could, in turn, revolutionize traditional approaches to cattle breeding and herald a new era in livestock industry practices."
Bovine blastoids represent a valuable model to study early embryo development and understand the causes of early embryonic loss, said Dr. Wu.
In this study, appearing in Cell Stem Cell, researchers demonstrated that bovine blastoids can be assembled from cultured stem cells. The international group from Brazil, China, and the U.S. describes an efficient method for generating bovine blastoids and for growing them in vitro in the lab.
Using immunofluorescence and single-cell RNA sequencing analyses, researchers showed that the lab-generated blastoids resembled natural bovine blastocysts in morphology, size, cell number, and lineage composition, and could produce maternal recognition signaling upon transfer to recipient cows.
"We were able to develop an efficient and robust protocol to generate bovine blastoids by assembling bovine embryonic and trophoblast stem cells that can self-organize and faithfully re-create all three blastocyst lineages," said lead author Carlos A. Pinzón-Arteaga, D.V.M., M.S., a student in the UT Southwestern Graduate School of Biomedical Sciences. "Future comparisons with in vivo-produced embryos are still needed to better evaluate the blastoid model."
A readily available stem cell embryo model could greatly benefit research on embryonic development, as it addresses the challenge of limited access to early embryos, noted Dr. Wu, a New York Stem Cell Foundation-Robertson Investigator and member of the Hamon Center for Regenerative Science and Medicine and Cecil H. and Ida Green Center for Reproductive Biology Sciences at UT Southwestern.
This study builds on previous findings from the Wu lab that reported the generation of similar mouse (Li et al., Cell, 2019) and human (Yu et al., Nature, 2021) embryo models. The work from the Wu lab has contributed to the development of novel culture systems and methods that enable the generation of new stem cells for basic and translational studies.
Dr. Wu's work has expanded the spectrum of pluripotent states by capturing mouse pluripotent stem cells with distinct molecular and phenotypic features from different developmental stages. And some of these culture conditions developed in mice enabled the generation of pluripotent stem cell models from many other mammalian species, including humans, nonhuman primates, and ungulates.
In addition to stem cell-derived blastocyst models (blastoids), the Wu lab uses interspecies chimeras to study fundamental biology such as conserved and divergent developmental programs, determination of body and organ size, species barriers, and cancer resistance. The lab also works to develop new applications for regenerative medicine.
Other UTSW researchers who contributed to this study are Lizhong Liu, Ph.D., Assistant Instructor of Molecular Biology, and Masahiro Sakurai, Ph.D., Assistant Instructor of Molecular Biology.
More information: Carlos A. Pinzón-Arteaga et al, Bovine blastocyst-like structures derived from stem cell cultures, Cell Stem Cell (2023). DOI: 10.1016/j.stem.2023.04.003
Provided by UT Southwestern Medical Center | Biology |
The wrong chemical dosage has been used to protect crops against mice, as they exhibit lower sensitivity to zinc phosphide than previously thought, new research shows.
Australia's national science agency, the CSIRO, has recommended a new bait formulation that doubles the amount of zinc phosphide (ZnP) in grain baits used in broadscale agriculture.
The research came in response to farmers' concerns that baits were not being effective, even before the mouse plague of 2021, when mouse numbers surged despite thousands of tonnes of poisons being deployed.
The plague wreaked havoc on agricultural activity and mental health in regional New South Wales and Queensland for more than 10 months, and the effects are still being seen in Western Australia, where mouse numbers remain high.
Steve Henry, a CSIRO research officer specialising in the impact of mice on the grain industry, said: "We've listened to farmers' concerns, and we then went and did some trials to address those concerns, and we found that their concerns were completely founded."
Zinc phosphide-coated wheat bait is the only registered in-crop rodenticide in Australia, and had previously been approved for use at a concentration of 25g of active zinc phosphide per kilogram of wheat grain.
The new research is the first laboratory-based mouse bait efficacy study in Australia since the chemical was registered for agricultural use about 20 years ago.
Henry said the 25g/kg dose can work well in instances where there is not much other food around to distract mice, and they can quickly find two or three grains of the bait, which is enough to get a lethal dose.
If not, he said, "the bait is competing with those other foods for the attention of a mouse".
"If they have … a dose that doesn't kill them, then they become bait averse and they stop eating the bait."
The initial research prompted emergency permits to be issued during the height of the plague, allowing bait producers to double the toxicity of their products from 25g/kg to 50g/kg.
Since then, CSIRO researchers have conducted a series of studies to reassess the sensitivity of mice to zinc phosphide in the laboratory, as well as the effectiveness of a new bait formulation in the field.
Henry said the final study, conducted near Parkes in western NSW, confirmed that 50g/kg of zinc phosphide on grain bait is required to consistently reduce mouse populations, achieving "more than an 80% reduction in mouse populations more than 90% of the time".
Rodenticides have been known to have negative effects on non-target species.
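As a rough illustration of the bait arithmetic above, the sketch below computes how much zinc phosphide a mouse would ingest per coated grain at the old and new coating rates; the wheat-grain mass is an assumed illustrative value and does not come from the article or the CSIRO study.

```python
# Back-of-the-envelope: zinc phosphide (ZnP) ingested per coated wheat grain.
# The grain mass is an assumed, illustrative figure (roughly 40 mg per grain).
grain_mass_g = 0.04

for conc_g_per_kg in (25, 50):                    # old vs. newly recommended coating rate
    mg_per_grain = conc_g_per_kg * grain_mass_g   # g/kg is numerically equal to mg/g
    for n_grains in (1, 2, 3):
        print(f"{conc_g_per_kg} g/kg, {n_grains} grain(s): "
              f"~{n_grains * mg_per_grain:.1f} mg ZnP ingested")
```

Doubling the coating rate doubles the dose delivered per grain, which is consistent with the article's point that a mouse needs to find fewer grains before receiving a lethal dose rather than a sublethal, aversion-inducing one.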
John Grant, a spokesperson for Wires, Australia's largest wildlife rescue organisation, said although Wires did not conduct any tests proving animals died from mouse bait poisoning, there were many incidents of mass deaths (upwards of 50 birds) from suspected poisoning during the mouse plague.
Henry said zinc phosphide is not a danger to non-target species, but "it's very easy to confuse the issues associated with the use of anticoagulants with zinc phosphide".
"Only farmers are allowed to use zinc phosphide in broadscale agriculture, but the [anticoagulant] toxins that are available for use around towns and around houses are very different to zinc phosphide, and there is a chance of secondary poisoning associated with those baits."
Zinc phosphide does not cause secondary poisoning because the poison gets used up in the act of damaging the mouse's major organs, so none is left in the mouse to poison other animals that eat those mice, Henry said.
He said there is little chance of native wildlife being directly poisoned by zinc phosphide because there are very few or no other mammals that live in the relevant paddocks.
When it comes to birds, he says, the dark colour of the bait doesn't fit within the spectrum of birds' vision.
He said farmers also aren't supposed to spread the bait within 50m of boundary fences, where birds inhabit the trees.
Henry believed any incidents of animal death during the mouse plague were likely caused by a different kind of mouse bait to zinc phosphide.
Robert Davis, a senior lecturer in wildlife biology at Edith Cowan University, said: "I agree there will be little to no risk of secondary poisoning."
"However, it is not true to say native species are not at risk from consuming these baits," Davis said.
"I see them as a risk to any native rodents, of which there are many species in Australia, and a range of other native marsupials including possums, bandicoots, wallabies and kangaroos."
"However, the issue of mouse plagues is a tricky one and at the end of the day, this may be the lesser of the evils if used very carefully and sparingly," Davis said. | Biology |
Boys choir found to compete sexually for female audiences through more energetic singing
Research led by Western Sydney University, Australia, has found that boys singing in a choir engage in simultaneous group cohesion and sexually motivated competition exhibited through voice modulation in the presence of a female audience.
In a paper, "Sex-related communicative functions of voice spectral energy in human chorusing," published in The journal Biology Letters, the team explores the evolutionary origins of music, suggesting that it may have developed from capacities supporting both cooperation and competition.
Similar to interactive displays in non-human animals, human music may function both cooperatively and competitively, allowing different forms of communication to occur simultaneously at both group and individual levels.
The St Thomas Choir of Leipzig was studied to understand how male singers modify their behavior in the presence of female audience members. A previous study with the boy choir found that basses (the oldest boys with the deepest voices) exhibited increased energy in the "singer's formant" frequency region when girls were in the audience.
The study, conducted online, involved listening tasks to test female and male sensitivity and preferences for this subtle vocal modulation in choir performances. The study involved 679 females and 481 males in the sensitivity study and 655 females and 432 males in the preference study, with varying ages and musical training.
The stimulus set included audio excerpts from performances of a Chorale and a Fugue by the St Thomas Choir of Leipzig, with and without girls in the audience. Both female and male listeners were sensitive to increased high-frequency spectral energy in the singer's formant, the especially energetic peaks that usually coincided with vowels.
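To illustrate the kind of measurement this involves, the fraction of an excerpt's spectral energy that falls in the singer's-formant region can be estimated with a short calculation; this is a minimal sketch, not the authors' analysis pipeline, and the 2–4 kHz band edges are an assumption.

```python
import numpy as np

def band_energy_ratio(signal, sample_rate, band=(2000.0, 4000.0)):
    """Fraction of total spectral energy falling in `band` (Hz).

    A crude proxy for singer's-formant energy; the band edges are assumed,
    and a real analysis would window the signal and average over frames.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs < band[1])
    return spectrum[in_band].sum() / spectrum.sum()

# Toy usage: a synthetic 3 kHz tone concentrates its energy inside the band.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 3000 * t)
print(band_energy_ratio(tone, sr))   # close to 1.0
```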
Only female listeners exhibited a reliable preference for the enhanced singer's formant, regardless of the type of musical piece. Male listeners did not show a preference for higher-energy singing, and those performing in the choir only enhanced that high energy when the audience included females.
The findings suggest that human chorusing represents a flexible form of social communicative behavior that brings out a subtle form of sexually motivated competition.
More information: Peter E. Keller et al, Sex-related communicative functions of voice spectral energy in human chorusing, Biology Letters (2023). DOI: 10.1098/rsbl.2023.0326
Journal information: Biology Letters
© 2023 Science X Network | Biology |
CRISPR/Cas9 reveals a key gene involved in the evolution of coral skeleton formation
New work led by Carnegie's Phillip Cleves uses cutting-edge CRISPR/Cas9 genome editing tools to reveal a gene that's critical to stony corals' ability to build their reef architectures. It is published in Proceedings of the National Academy of Sciences.
Stony corals are marine invertebrates that build large skeletons, which form the basis of reef ecosystems. These biodiversity hotspots are home to about a quarter of known marine species.
"Coral reefs have tremendous ecological value," said Cleves. "But they are in decline due to human activity. Carbon pollution that we spew into the air is both warming oceans—causing fatal bleaching events—and altering seawater chemistry—resulting in ocean acidification that impedes reef growth."
Over time, the excess carbon dioxide released into our atmosphere by burning fossil fuels is taken up into the ocean, where it reacts with the water to form an acid that is corrosive to coral, shellfish, and other marine organisms.
Stony corals are vulnerable to ocean acidification because they construct their skeletons by the accretion of calcium carbonate, a process called calcification, which becomes increasingly difficult as the surrounding water's pH decreases. Because of the importance of coral skeleton formation in building reefs, a major research focus has been to understand the genes controlling the process and how it has evolved in corals.
For several years, Cleves' lab has used the Nobel Prize-winning CRISPR/Cas9 technology to identify cellular and molecular processes that could help guide coral conservation and rehabilitation efforts. For example, they previously revealed a gene that is critical to how a coral responds to heat stress—information that may help predict how corals will handle future bleaching events.
Now, his team—including Carnegie's Amanda Tinoco—used genome editing tools to determine that a particular gene, called SLC4γ, is required for young coral colonies to begin building their skeletons. The protein it encodes is responsible for transporting bicarbonate across cellular membranes. Interestingly, SLC4γ is only present in stony corals, but not in their non-skeleton-forming relatives. Together, these results imply that stony corals used the novel gene, SLC4γ, to evolve skeleton formation.
"By applying cutting-edge molecular biology techniques to pressing environmental problems, we can reveal the genes that determine ecologically important traits." Cleves concluded. "In developing these genetic tools to study coral biology, we can greatly improve our understanding of their biology and learn how to mount successful conservation efforts for these fragile communities."
Other co-authors on the study include Carnegie's Lorna Mitchison-Field, Jacob Bradford and Dimitri Perrin of Queensland University of Technology, Christian Renicke and John Pringle of Stanford University, and Line Bay of the Australian Institute of Marine Science.
More information: Tinoco, Amanda I. et al, Role of the bicarbonate transporter SLC4γ in stony-coral skeleton formation and evolution, Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2216144120
Journal information: Proceedings of the National Academy of Sciences
Provided by Carnegie Institution for Science | Biology |
Jaw shapes of 90 shark species show evolution driven by habitat
An international research team led by Faviel A. López-Romero from the Department of Paleontology at the University of Vienna investigated how the jaw shape of sharks has changed over the course of evolution. They conclude that in the most widespread shark species, the jaws show relatively little variation in shape over millions of years; the most variable jaws were found in deep-sea sharks. The results of this study were published in the journal Communications Biology.
One of the most prominent traits in sharks is the shape of their lower jaws, which also bear impressive teeth. With their jaws, sharks are able to feed on a wide variety of prey, which also places them among the ocean's top predators. The wide prey spectrum is also reflected in the corresponding adaptations that sharks have evolved throughout their evolutionary history. All of these adaptations allow them to spread into virtually all marine habitats, with some species even venturing into freshwater.
How the shape of shark jaws changed during their evolution has now been investigated by an international and multidisciplinary research team from the University of Vienna, Imperial College London (UK), Muséum national d'histoire naturelle (Paris, France), Christian-Albrechts-University (Kiel, Germany), and Naturalis Museum (Leiden, The Netherlands). The results illustrate the importance of prey, trophic level in marine food webs, and habitat in relation to jaw shape diversity among shark species. This also helps to uncover the evolutionary causes of the differences in jaw morphology related to habitats.
Today's sharks have a long evolutionary history, with some taxa that can be traced back as far as 180 million years. During all this time they have been a key component of the fauna of the marine realm and its food webs, mainly occupying higher trophic positions as meso- and top predators. At the same time, sharks adopted many lifestyles and forms, such as bottom dwellers, fast swimmers in the open sea, and even some of the smallest species in the deep sea.
To study the potential relationship between jaw morphology and the sharks' lifestyle, a quantitative analysis was conducted using X-ray computed tomographic scans of the jaws of 90 shark species, with 3D reconstructions prepared to estimate how shark jaw shape evolved through time.
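The article does not spell out the shape-analysis pipeline, but a standard way to quantify jaw-shape variation from 3D reconstructions is landmark-based geometric morphometrics: Procrustes superimposition of digitized landmarks followed by a principal component analysis of the aligned coordinates. The sketch below assumes that approach and uses random toy data in place of real landmark sets.

```python
import numpy as np

def procrustes_align(shapes):
    """Align landmark configurations of shape (n_specimens, n_landmarks, 3):
    centre each, scale to unit centroid size, rotate onto the first specimen.
    (Reflections are not handled in this simplified sketch.)"""
    centred = [s - s.mean(axis=0) for s in shapes]
    scaled = [s / np.linalg.norm(s) for s in centred]   # unit centroid size
    ref = scaled[0]
    aligned = []
    for s in scaled:
        u, _, vt = np.linalg.svd(s.T @ ref)             # orthogonal Procrustes fit
        aligned.append(s @ (u @ vt))
    return np.stack(aligned)

def shape_pca(aligned):
    """PCA of the aligned coordinates; returns specimen scores and
    the proportion of shape variation explained by each axis."""
    flat = aligned.reshape(aligned.shape[0], -1)
    flat = flat - flat.mean(axis=0)
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    return u * s, (s ** 2) / (s ** 2).sum()

# Toy stand-in for 90 jaws, each described by 30 landmarks in 3D.
rng = np.random.default_rng(0)
jaws = rng.normal(size=(90, 30, 3))
scores, explained = shape_pca(procrustes_align(jaws))
print(explained[:3])   # share of variation captured by the first three shape axes
```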
The results indicate, surprisingly, that among highly species-rich groups such as requiem sharks, the jaws display low shape variation. This is interesting since requiem sharks are one of the most widely distributed groups of sharks. Another interesting finding is that the most variable jaws were found among species living in the deep sea. "Although sharks from the deep sea are not as extensively represented in the data as reef sharks, they display the most disparate forms seen in our analysis," explains first author López-Romero.
Among their many adaptations, sharks inhabiting the deep sea exhibit, in addition to bioluminescence, various feeding strategies that range from taking big chunks out of whales to feeding on eggs or cephalopods. For most of the species found on reefs and the large top predators of the open sea, the options seem more limited: most mainly prey on fishes, and even other shark species.
"Of course, many sharks in these environments feed on a large variety of prey with only few having adapted to a single, specific prey, such as the bonnethead shark, Sphyrna tiburo, which preys almost entirely on hard-shelled crabs, while shrimps and fish are only capture occasionally," Jürgen Kriwet from the University of Vienna states, who was involved in this study.
Studying the evolution of jaw shape also made it possible to reconstruct the changes in jaw shape through deep time. "Remarkable changes occurred in carpet, sleeper, and dogfish sharks. These changes were probably concomitant with the distribution of these sharks in reefs and the deep sea, which noticeably distinguishes them morphologically from other species with larger jaws, as seen in the top predators of the open sea," concludes López-Romero.
More information: Faviel A. López-Romero et al, Shark mandible evolution reveals patterns of trophic and habitat-mediated diversification, Communications Biology (2023). DOI: 10.1038/s42003-023-04882-3
Journal information: Communications Biology
Provided by University of Vienna | Biology |
Plants orient their organs in response to the gravity vector, with roots growing towards gravity and shoots growing in the opposite direction. The movement of statoliths responding to the inclination relative to the gravity vector is employed for gravity sensing in both plants and animals. However, in plants, the statolith takes the form of a high-density organelle, known as an amyloplast, which settles toward gravity within the gravity-sensing cell. Despite the significance of this gravity sensing mechanism, the exact process behind it has eluded scientists for over a century. A groundbreaking study led by Professor Miyo Terao Morita at the National Institute for Basic Biology (NIBB) in Japan has revealed that the translocation of signaling proteins from amyloplasts to the plasma membrane is the key to deciphering this enigmatic mechanism. The research, titled "Cell polarity linked to gravity sensing is generated by LZY translocation from statoliths to the plasma membrane," is now available online in Science ahead of print.
For years, researchers speculated on the gravity sensing mechanism, with hypotheses such as the force sensing model and position sensing hypothesis. However, definitive evidence for each remained elusive, until now. In their earlier work, the team discovered that Arabidopsis LAZY1-LIKE (LZY) proteins play a crucial role in gravity signal transduction, with polar localization at the plasma membrane on the side of gravity. Nevertheless, the exact mechanism establishing this remarkable localization remained unknown.
Through sophisticated live cell imaging techniques, including vertical stage microscopy and optical tweezers, the research team made a significant breakthrough. They found that LZYs not only localize at the plasma membrane near amyloplasts but also at the amyloplasts themselves. "The plasma membrane localization of LZYs surprised us, as it is generated by the close proximity of amyloplasts to the membrane," explained Takeshi Nishimura, Assistant Professor at NIBB and the first author of the study.
"We demonstrated that localization on both the plasma membrane and amyloplasts is necessary for gravity signaling in roots, indicating its fundamental role in this process," added Hiromasa Shikata, Assistant Professor at NIBB and the co-first author.
Professor Miyo Terao Morita further emphasized, "LZYs act as signal molecules, transmitting positional information from amyloplasts to the plasma membrane, where the regulation of auxin transport occurs." This revelation provides compelling support for the "position sensor hypothesis," explaining gravity sensing in plants through the proximity or the contact between statoliths and the plasma membrane.
LAZY1 was originally identified as the gene responsible for a rice gravitropism mutant. Its counterparts are conserved across various land plants, underscoring their fundamental significance. The distinctive "lazy" phenotype, marked by the lateral spreading of branches and roots, has been observed in mutants of these genes in various plant species, including crops. Further studies on LZY may inform technology for controlling plant architecture and production.
| Biology |
August 22, 2023
Salk scientists pinpointed specific microbes and bile acids that become more prevalent in the guts of mice fed high-fat diets
LA JOLLA—The prevalence of colorectal cancer in people under the age of 50 has risen in recent decades. One suspected reason: the increasing rate of obesity and high-fat diets. Now, researchers at the Salk Institute and UC San Diego have discovered how high-fat diets can change gut bacteria and alter digestive molecules called bile acids that are modified by those bacteria, predisposing mice to colorectal cancer.
In the study, published in Cell Reports on August 22, 2023, the team found increased levels of specific gut bacteria in mice fed high-fat diets. Those gut bacteria, they showed, alter the composition of the bile acid pool in ways that cause inflammation and affect how quickly intestinal stem cells replenish. Bile acids are molecules produced by the liver and used by the gut to help digest food and absorb cholesterol, fats, and nutrients.
“The balance of microbes in the gut is shaped by diet, and we are discovering how alterations in the gut microbial population (the gut microbiome) can create problems that lead to cancer,” says co-senior author and Professor Ronald Evans, director of Salk’s Gene Expression Laboratory. “This paves the way toward interventions that decrease cancer risk.”
In 2019, Evans and his colleagues showed in mice how high-fat diets boosted the overall bile acid levels. The shift in bile acids, they found, shut down a key protein in the gut—called the Farnesoid X receptor (FXR)—and increased the prevalence of cancer.
However, there were still missing links in the story, including how the gut microbiome and bile acids are changed by high-fat diets.
In the new work, Evans' group teamed up with the labs of Rob Knight and Pieter Dorrestein at UC San Diego to examine the microbiomes and metabolomes—collections of dietary and microbially derived small molecules—in the digestive tracts of animals on high-fat diets. They studied mice with a genetic mutation that makes them more susceptible to colorectal tumors.
The scientists discovered that although mice fed high-fat diets had more bile acids in their guts, it was a less diverse collection with a higher prevalence of certain bile acids that had been changed by gut bacteria. They also showed that these modified bile acids affected the proliferation of stem cells in the intestines. When these cells don’t replenish frequently, they can accumulate mutations—a key step toward encouraging the growth of cancers, which often arise from these stem cells.
“We are only just beginning to understand these bacterially-conjugated bile acids and their roles in health and disease,” says co-author Michael Downes, a staff scientist at Salk.
There were also striking differences in the microbiomes of the mice on high-fat diets: the collections of gut bacteria in these mice’s digestive tracts were less diverse and contained different bacteria than the microbiomes of mice not on high-fat diets. Two of these bacteria—Ileibacterium valens and Ruminococcus gnavus—were able to produce these modified bile acids.
The scientists were surprised to discover that a high-fat diet actually had a greater impact on the microbiome and modified bile acids than a genetic mutation that increases cancer susceptibility in the animals.
“We’ve pinpointed how high-fat diet influences the gut microbiome and reshapes the bile acids pool, pushing the gut into an inflamed, disease-associated state,” says co-first author Ting Fu, a former postdoctoral fellow in the Evans lab.
The researchers believe high-fat diets change the composition of the microbiome, encouraging the growth of bacteria like I. valens and R. gnavus. In turn, that boosts levels of modified bile acids. In a vicious cycle, those bile acids create a more inflammatory environment that can further change the makeup of gut bacteria.
“We’ve deconstructed why high-fat diets aren’t good for you, and identified specific strains of microbes that flare with high-fat diets,” says Evans, March of Dimes Chair in Molecular and Developmental Biology. “By knowing what the problem is, we have a much better idea of how to prevent and reverse it.”
In the future, the team will study how quickly the microbiome and bile acids change after an animal begins eating a high-fat diet. They also plan to study ways to reverse the cancer-associated effects of a high-fat diet by targeting FXR—the protein that they previously discovered to be associated with bile acid changes.
Other authors of the paper are Tae Gyu Oh, Justin L McCarville, Fritz Cayabyab, Mingxiao He, Ruth T. Yu, Annette Atkins, and Janelle Ayres of Salk; Gibraan Rahman, Hui Zhi, Zhenjiang Xu, Anupriya Tripathi, Cameron Martino, Qiyun Zhu, Fernando Vargas, and Manuela Raffatellu of UC San Diego; Tao Huan, Jian Guo, Brian Low, and Shipei Xing of University of British Columbia; and Sally Coulter and Christopher Liddle of University of Sydney.
The work was supported by grants from the National Cancer Institute (CA014195), the National Institutes of Health (CA265762-01, DP1 AT010885, AI126277, AI145325, AI154644, AI114625, P01HL147835, R01DK057978), the collaborative microbial metabolite center (1U24DK133658-54701), a UC San Diego Postdoc Microbiome Center Seed Pilot Grant, a Hewitt Medical Foundation Fellowship, a Salk Alumni Fellowship, a Crohn’s & Colitis Foundation (CCFA) Visiting IBD Research Fellowship, the Lustgarten Foundation (122215393-02), the NOMIS Foundation, a SWCRF Investigator Award, the David C. Copley Foundation, the Wasily Family Foundation, the Don and Lorraine Freeberg Foundation, and the Burroughs Wellcome Fund.
DOI: 10.1016/j.celrep.2023.112997
Journal: Cell Reports
Authors: Ting Fu, Tao Huan, Gibraan Rahman, Hui Zhi, Zhenjiang Xu, Tae Gyu Oh, Jian Guo, Sally Coulter, Anupriya Tripathi, Cameron Martino, Justin L McCarville, Qiyun Zhu, Fritz Cayabyab, Brian Low, Mingxiao He, Shipei Xing, Fernando Vargas, Ruth T. Yu, Annette Atkins, Christopher Liddle, Janelle Ayres, Manuela Raffatellu, Pieter C. Dorrestein, Michael Downes, Rob Knight, Ronald M. Evans
Unlocking the secrets of life itself is the driving force behind the Salk Institute. Our team of world-class, award-winning scientists pushes the boundaries of knowledge in areas such as neuroscience, cancer research, aging, immunobiology, plant biology, computational biology and more. Founded by Jonas Salk, developer of the first safe and effective polio vaccine, the Institute is an independent, nonprofit research organization and architectural landmark: small by choice, intimate by nature and fearless in the face of any challenge. | Biology |
Nanosatellite shows the way to RNA medicine of the future
The RNA molecule is commonly recognized as the messenger between DNA and protein, but it can also be folded into intricate molecular machines. An example of a naturally occurring RNA machine is the ribosome, which functions as a protein factory in all cells.
Inspired by natural RNA machines, researchers at the Interdisciplinary Nanoscience Center (iNANO) have developed a method called "RNA origami," which makes it possible to design artificial RNA nanostructures that fold from a single strand of RNA. The method is inspired by the Japanese paper folding art, origami, where a single piece of paper can be folded into a given shape, such as a paper bird.
Frozen folds provide new insight
The research paper in Nature Nanotechnology describes how the RNA origami technique was used to design RNA nanostructures, that were characterized by cryo-electron microscopy (cryo-EM) at the Danish National cryo-EM Facility EMBION. Cryo-EM is a method for determining the 3D structure of biomolecules, which works by freezing the sample so quickly that water does not have time to form ice crystals, which means that frozen biomolecules can be observed more clearly with the electron microscope.
Images of many thousands of molecules can be converted on the computer into a 3D map, which is used to build an atomic model of the molecule. The cryo-EM investigations provided valuable insight into the detailed structure of the RNA origamis, which allowed optimization of the design process and resulted in more ideal shapes.
"With precise feedback from cryo-EM, we now have the opportunity to fine-tune our molecular designs and construct increasingly intricate nanostructures," explains Ebbe Sloth Andersen, associate professor at iNANO, Aarhus University.
Discovery of a slow folding trap
Cryo-EM images of an RNA cylinder sample turned out to contain two very different shapes, and by freezing the sample at different times it was evident that a transition between the two shapes was taking place. Using the technique of small-angle X-ray scattering (SAXS), where the samples are not frozen, the researchers were able to observe this transition in real time and found that the folding transition occurred after approx. 10 hours.
The researchers had discovered a so-called "folding trap" where the RNA gets trapped during transcription and only later gets released.
"It was quite a surprise to discover an RNA molecule that refolds this slow since folding typically takes place in less than a second" tells Jan Skov Pedersen, Professor at Department of Chemistry and iNANO, Aarhus University.
"We hope to be able to exploit similar mechanisms to activate RNA therapeutics at the right time and place in the patient," explains Ewan McRae, the first author of the study, who is now starting his own research group at the "Centre for RNA Therapeutics" at the Houston Methodist Research Institute in Texas, U.S..
Construction of a nanosatellite from RNA
To demonstrate the formation of complex shapes, the researchers combined RNA rectangles and cylinders to create a multi-domain "nanosatellite" shape, inspired by the Hubble Space Telescope.
"I designed the nanosatellite as a symbol of how RNA design allows us to explore folding space (possibility space of folding) and intracellular space, since the nanosatellite can be expressed in cells," says Cody Geary, assistant professor at iNANO, who originally developed the RNA-origami method.
However, the satellite proved difficult to characterize by cryo-EM due to its flexible properties, so the sample was sent to a laboratory in the U.S., where they specialize in determining the 3D structure of individual particles by electron tomography, the so-called IPET-method.
"The RNA satellite was a big challenge! But by using our IPET method, we were able to characterize the 3D shape of individual particles and thus determine the positions of the dynamic solar panels on the nanosatellite," says Gary Ren from the Molecular Foundry at Lawrence Berkeley National Laboratory, California, U.S..
The future of RNA medicine
The investigation of the RNA origamis contributes to improving the rational design of RNA molecules for use in medicine and synthetic biology. A new interdisciplinary consortium, COFOLD, supported by the Novo Nordisk Foundation, will continue the investigations of RNA folding processes by involving researchers from computer science, chemistry, molecular biology, and microbiology to design, simulate and measure folding at higher time resolution.
"With the RNA design problem partially solved, the road is now open to creating functional RNA nanostructures that can be used for RNA-based medicine, or act as RNA regulatory elements to reprogram cells," says Ebbe Sloth Andersen.
More information: Ebbe Andersen, Structure, folding and flexibility of co-transcriptional RNA origami, Nature Nanotechnology (2023). DOI: 10.1038/s41565-023-01321-6. www.nature.com/articles/s41565-023-01321-6 | Biology |
Researchers from Western and Brown University have made groundbreaking progress towards identifying the root cause and potential therapy for preeclampsia.
The pregnancy complication affects up to eight per cent of pregnancies globally and is the leading cause of maternal and fetal mortality due to premature delivery, complications with the placenta and lack of oxygen.
The research, led by Drs. Kun Ping Lu and Xiao Zhen Zhou at Western, and Drs. Surendra Sharma and Sukanta Jash at Brown, has identified a toxic protein, cis P-tau, in the blood and placenta of preeclampsia patients.
According to the study published in Nature Communications, cis P-tau is a central circulating driver of preeclampsia -- a "troublemaker" that plays a major role in causing the deadly complication.
"The root cause of preeclampsia has (so far) remained unknown, and without a known cause there has been no cure. Preterm delivery is the only life-saving measure," said Lu, professor of biochemistry and oncology at Schulich School of Medicine & Dentistry. Lu is also a Western Research Chair in Biotherapeutics.
"Our study identifies cis P-tau as a crucial culprit and biomarker for preeclampsia. It can be used for early diagnosis of the complication and is a crucial therapeutic target," said Sharma, who recently retired from his Brown roles as a professor of pathology and laboratory medicine (research) and professor of pediatrics (research).
In 2016, Sharma, a leading preeclampsia researcher, and his team had identified that preeclampsia and diseases like Alzheimer's had similar root causes related to protein issues. This research builds on that finding.
Until now, cis P-tau was mainly associated with neurological disorders like Alzheimer's disease, traumatic brain injuries (TBI) and stroke. This association was discovered by Lu and Zhou in 2015 as a result of their decades of research on the role of tau protein in cancer and Alzheimer's.
An antibody developed by Zhou in 2012 to target only the toxic protein while leaving its healthy counterpart unscathed is currently undergoing clinical trials in human patients suffering from TBI and Alzheimer's Disease. The antibody has shown promising results in animal models and human cell cultures in treating the brain conditions.
The researchers were curious whether the same antibody could work as a potential treatment for preeclampsia. Upon testing the antibody in mouse models they found astonishing results.
"In this study, we found the cis P-tau antibody efficiently depleted the toxic protein in the blood and placenta, and corrected all features associated with preeclampsia in mice. Clinical features of preeclampsia, like elevated blood pressure, excessive protein in urine and fetal growth restriction, among others, were eliminated and pregnancy was normal," said Sharma.
Sharma and his team at Brown have been working on developing an assay for early detection of preeclampsia and therapies to treat the condition. He believes the findings of this study have brought them closer to their goal.
Black and Hispanic women more susceptible
The tragic death of American track and field champion Tori Bowie earlier this year put the spotlight on preeclampsia, which disproportionately impacts Black and Hispanic women.
A gold, silver and bronze medalist in the 2016 Olympic Games, Bowie, 32, was found dead in her bed on May 2, 2023, while approximately eight months pregnant. According to the autopsy report the complications may have involved eclampsia -- a severe form of preeclampsia.
"Research has shown that women of certain races have genes that could possibly lead to higher than average blood pressure levels, eventually creating conditions for preeclampsia during pregnancy. However, it's also true that in many low socio-economic countries there's no registry to record PE cases. So, its link to other environmental factors is still unclear," said Sharma.
Preeclampsia and the brain
Recent research has also thrown light on preeclampsia's long-term impacts and possible links to brain health.
"Preeclampsia presents immediate dangers to both the mother and fetus, but its long-term effects are less understood and still unfolding," said Sharma. "Research has suggested a heightened risk of dementia later in life for both mothers who have experienced preeclampsia and their children." However, the causal link between preeclampsia and dementia is not known.
The researchers say this new study has pinpointed a potential underlying cause of the complex relationship between preeclampsia and brain health.
"Our study adds another layer to this complexity. For the first time, we've identified significant levels of cis P-tau outside the brain in the placenta and blood of preeclampsia patients. This suggests a deeper connection between preeclampsia and brain-related issues," said Jash, the lead author of the study.
As researchers delve deeper, how our bodies respond to stress is also emerging as a potential factor in the onset of preeclampsia.
"Although genetics play a role, factors like stress could be an important piece of the puzzle. Understanding how stress and other environmental factors intersect with biological markers like cis P-tau may offer a more complete picture," said Jash, assistant professor of molecular biology, cell biology and biochemistry (research) and pediatrics (research) at Brown.
A stress-response enzyme called Pin1
In 1996 and 1997, Lu and Zhou made the groundbreaking discovery of Pin1, which turns out to be a stress-response enzyme. This is a specific protein in the cells that becomes active or changes its behaviour in response to stressors, such as environmental challenges, toxins or physiological changes.
"Pin1 plays a pivotal role in keeping proteins, including the tau protein, in the functional shape during stress. When Pin1 becomes inactivated, it leads to the formation of a toxic, misshapen, variant of tau -- cis P-tau," said Zhou, associate professor, pathology and laboratory medicine at Schulich Medicine & Dentistry.
Interestingly, Pin1 is a key player in cancer signalling networks, turning on numerous cancer-causing proteins and turning off many cancer-suppressing ones. Found in high levels in most human cancers, it's particularly active in cancer stem cells, which are thought to be central to starting and spreading tumors and are hard to target with existing treatments.
"Essentially, when Pin1 is activated, it can lead to cancer. On the other hand, when there's a decrease or deactivation in Pin1, it results in the formation of the toxic protein cis P-tau, which leads to memory loss in Alzheimer's and after TBI or stroke. Now, we've uncovered its connection to preeclampsia as well," said Zhou.
"The results have far-reaching implications. This could revolutionize how we understand and treat a range of conditions, from pregnancy-related issues to brain disorders," said Lu.
Lu and Sharma had met at Brown in 2019, where Lu was invited to give a lecture on his research. Following an engaging session and a few dinners together, a collaboration between the Western researchers and Brown was forged.
"Science surprises us. I had never thought of working on finding a therapy for preeclampsia. It also shows that a collaboration can be transformative."
| Biology |
News Release: Thursday, December 22, 2022
Technique provides model for studying genesis of age-related macular degeneration and other eye diseases.
[Image caption: Growth of blood vessels across printed rows of an endothelial-pericyte-fibroblast cell mixture. By day 7, blood vessels fill in the space between the rows, forming a network of capillaries. Credit: Kapil Bharti, Ph.D., NEI]
Scientists used patient stem cells and 3D bioprinting to produce eye tissue that will advance understanding of the mechanisms of blinding diseases. The research team from the National Eye Institute (NEI), part of the National Institutes of Health, printed a combination of cells that form the outer blood-retina barrier—eye tissue that supports the retina's light-sensing photoreceptors. The technique provides a theoretically unlimited supply of patient-derived tissue to study degenerative retinal diseases such as age-related macular degeneration (AMD).
"We know that AMD starts in the outer blood-retina barrier," said Kapil Bharti, Ph.D., who heads the NEI Section on Ocular and Stem Cell Translational Research. "However, mechanisms of AMD initiation and progression to advanced dry and wet stages remain poorly understood due to the lack of physiologically relevant human models." The outer blood-retina barrier is the interface of the retina and the choroid, including Bruch's membrane and the choriocapillaris.National Eye Institute The outer blood-retina barrier consists of the retinal pigment epithelium (RPE), separated by Bruch’s membrane from the blood-vessel rich choriocapillaris. Bruch's membrane regulates the exchange of nutrients and waste between the choriocapillaris and the RPE. In AMD, lipoprotein deposits called drusen form outside Bruch's membrane, impeding its function. Over time, the RPE break down leading to photoreceptor degeneration and vision loss. Bharti and colleagues combined three immature choroidal cell types in a hydrogel: pericytes and endothelial cells, which are key components of capillaries; and fibroblasts, which give tissues structure. The scientists then printed the gel on a biodegradable scaffold. Within days, the cells began to mature into a dense capillary network. On day nine, the scientists seeded retinal pigment epithelial cells on the flip side of the scaffold. The printed tissue reached full maturity on day 42. Tissue analyses and genetic and functional testing showed that the printed tissue looked and behaved similarly to native outer blood-retina barrier. Under induced stress, printed tissue exhibited patterns of early AMD such as drusen deposits underneath the RPE and progression to late dry stage AMD, where tissue degradation was observed. Low oxygen induced wet AMD-like appearance, with hyperproliferation of choroidal vessels that migrated into the sub-RPE zone. Anti-VEGF drugs, used to treat AMD suppressed this vessel overgrowth and migration and restored tissue morphology. “By printing cells, we’re facilitating the exchange of cellular cues that are necessary for normal outer blood-retina barrier anatomy," said Bharti. "For example, presence of RPE cells induces gene expression changes in fibroblasts that contribute to the formation of Bruch's membrane – something that was suggested many years ago but wasn’t proven until our model." Among the technical challenges that Bharti's team addressed were generating a suitable biodegradable scaffold and achieving a consistent printing pattern through the development of a temperature-sensitive hydrogel that achieved distinct rows when cold but that dissolved when the gel warmed. Good row consistency enabled a more precise system of quantifying tissue structures. They also optimized the cell mixture ratio of pericytes, endothelial cells, and fibroblasts. Co-author Marc Ferrer, Ph.D., director of the 3D Tissue Bioprinting Laboratory at NIH’s National Center for Advancing Translational Sciences, and his team provided expertise for the biofabrication of the outer blood-retina barrier tissues “in-a-well,” along with analytical measurements to enable drug screening. “Our collaborative efforts have resulted in very relevant retina tissue models of degenerative eye diseases,” Ferrer said. 
“Such tissue models have many potential uses in translational applications, including therapeutics development.” Bharti and collaborators are using printed blood-retina barrier models to study AMD, and they are experimenting with adding additional cell types to the printing process, such as immune cells, to better recapitulate native tissue. This press release describes a basic research finding. Basic research increases our understanding of human behavior and biology, which is foundational to advancing new and better ways to prevent, diagnose, and treat disease. Science is an unpredictable and incremental process— each research advance builds on past discoveries, often in unexpected ways. Most clinical advances would not be possible without the knowledge of fundamental basic research. To learn more about basic research, visit https://www.nih.gov/news-events/basic-research-digital-media-kit. NEI leads the federal government’s efforts to eliminate vision loss and improve quality of life through vision research…driving innovation, fostering collaboration, expanding the vision workforce, and educating the public and key stakeholders. NEI supports basic and clinical science programs to develop sight-saving treatments and to broaden opportunities for people with vision impairment. For more information, visit https://www.nei.nih.gov. About the National Institutes of Health (NIH):
NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit www.nih.gov. NIH…Turning Discovery Into Health® ### | Biology |
Feline researchers have long believed that purring is produced by voluntary muscle contractions, but a new report indicates that this vibration in the larynx of cats may be explained by the myoelastic aerodynamic theory of phonation.
Studies on the complex action that produces a unique vibration in the larynx of cats—known as purring to most of us—have taken an important turn. It turns out that the biomechanics of the sounds emitted by domestic cats when they feel comfortable or stressed may be closer to a snore than a voluntary muscle spasm.
New research published in Current Biology suggests that connective tissue masses are embedded in the vocal folds of the larynges of domestic cats. These may allow felines to produce self-sustained low-frequency oscillations without neural input or muscular contractions. The researchers found that anatomical adaptations—the "pads" of tissue in the vocal fold—respond to air entering the lungs.
What Is a Purr, Really?
Voluntary muscle contractions were thought to cause the vibratory component of purring. A contraction is initiated when the nervous system generates a signal that travels through a motor neuron to a neuromuscular junction. Once there, it releases a chemical message that tenses the fibers and triggers a movement.
The authors of the new study suggest that purring instead results from the laryngeal pads of cats. This is in keeping with the myoelastic aerodynamic theory, which states that vocal fold oscillation is produced as a result of asymmetric forcing functions over the closing and opening portions of the glottal cycle. The team argues that the flow of air entering and leaving the lungs activates the vibrations of the vocal cords, producing sounds like those of a human voice and characteristic sounds in animals.
To test this, the scientists experimented on eight larynges that had been removed from domestic cats (all had been humanely euthanized after being diagnosed with terminal illnesses). The excised larynges were housed in vertical tubes that supplied warm, moist air similar to the air that enters the body when breathing. The researchers were able to elicit the low-frequency phonation characteristic of purring without neural stimulation.
The study does not rule out the possibility that muscle contractions play a part in purring, but the team argues that there is insufficient evidence to conclude that they are the sole cause of purring. Instead, the research indicates that air dynamics may trigger the vibration mechanism.
Why Do Cats Purr?
Cats purr their entire lives, beginning when they are kittens. Science has not yet fully understood why they purr in every circumstance, but biologists, veterinarians, and animal scientists have reached some general conclusions:
- Kittens purr so their mothers can find them
- Purring encourages the healing of wounds
- Purring produces serotonin, which is why it is often compared to human smiles
- Domesticated cats don't only purr when they are content, but also when they are stressed
The article's conclusions have sparked some controversy. Biomechanical engineers interviewed by Science claim that the experiment was limited to verifying the functioning of the larynx in isolation, without taking into account the complex systems of a living cat, which they feel represents a significant oversight. Scientist David Rice, for instance, compared the research to removing the mouthpiece of a wind instrument and then analyzing the noise it produces independently from the context of that instrument. | Biology |
Slumbering among thousands of bacterial strains in a collection of natural specimens at The Herbert Wertheim UF Scripps Institute for Biomedical Innovation & Technology, several fragile vials held something unexpected, and possibly very useful.
Writing in the journal Nature Chemical Biology, a team led by chemist Ben Shen, Ph.D., described discovery of two new enzymes, ones with uniquely useful properties that could help in the fight against human diseases including cancer. The discovery, published last week, offers potentially easier ways to study and manufacture complex natural chemicals, including those that could become medicines.
The contribution of bacterial chemicals to the history of drug discovery is remarkable, said Shen, who directs the Natural Products Discovery Center at the institute, one of the world's largest microbial natural product collections.
"Few people realize that nearly half of the FDA-approved antibiotics and anticancer drugs on the market are natural products or are inspired by them," Shen said. "Nature is the best chemist to make these complex natural products. We are applying modern genomic technologies and computational tools to understand their fascinating chemistry and enzymology, and this is leading to progress at unprecedented speed. These enzymes are the latest exciting example."
The enzymes the team discovered have a descriptive, if unwieldy, name. They are called "cofactorless oxygenases." This means the bacterial enzymes pull oxygen from the air and incorporate it into new compounds, without requiring the typical metals or other cofactors to initiate the necessary chemical reaction.
This new way of synthesizing defensive substances would confer a survival advantage, enabling the organism to fend off infections or invaders. And because enzymes are to chemists what drill bits or saw blades are to a carpenter, they offer scientists new ways to create useful things, said the paper's first authors, postdoctoral researchers Chun Gui, Ph.D., and Edward Kalkreuter, Ph.D.
Most immediately, the discovery of the enzymes, TnmJ and TnmK2, solves a lingering mystery of how a potential antibiotic and anticancer compound the Shen lab had first discovered in 2016, tiancimycin A, achieved such potency, Gui and Kalkreuter said.
The enzymes enable the bacteria to produce compounds for targeting and breaking up DNA, Gui said. This would be immensely useful in fighting off a virus or other germ -- or killing cancer.
Tiancimycin A is being developed as part of a cancer-targeting antibody therapy. These types of combined antibody-drug therapeutics represent a rapidly growing new approach to fighting cancer. But a critical step to using tiancimycin A as an antibody's payload is making enough to study it at a larger scale. That proved challenging.
"Even after we identified genes responsible for encoding tiancimycin A, several of the steps required to synthesize it could not be predicted," Gui said. "The two enzymes described in the current study are highly unusual."
Tiancimycin A was first found in a soil-dwelling bacterium, a type of Streptomyces from the strain collection at the Natural Products Discovery Center. To make its powerful chemical weapon, the organism had to solve a problem. It somehow had to break three highly stable carbon-carbon bonds and replace them with more reactive carbon-oxygen bonds. For a long time, the scientists couldn't understand how the bacteria managed that feat.
Cracking the mystery involved finding other tiancimycin A-like natural product-producing bacteria among the institute's Natural Products Discovery Center collection of 125,000 bacterial strains, and analyzing their genomes to search for the evolutionary hints.
The historic collection had long been housed in a pharmaceutical company's basement, collected over decades following the discovery of penicillin in the scientific community's hopeful rush to find the next great antibiotic. The collection did generate several historically important drugs through the years, including the tuberculosis antibiotic streptomycin and the organ transplant drug sirolimus. But the majority of the collection's freeze-dried bacterial strains had rested in their glass vials, unexplored.
In 2018, Shen won a competition for the collection, so that it could be fully investigated in an academic setting, where it would be open to science. His team is now developing ways to study the strains, read their genomes and deposit the information into a searchable database for the scientific community to access. Modern genome sequencing and bioinformatics techniques are revealing that there may be as many as 30 interesting gene clusters in each strain of bacteria they study, and many of them code for natural products never before documented by scientists, said Shen, who is a member of the UF Health Cancer Center.
The discovery of the new cofactorless enzymes is but the latest example of the chemical riches that lie within The Wertheim UF Scripps Institute's collection, Shen said. Their discovery has sparked new excitement about further investigating the reasons the unique chemistry evolved, and the ways it may prove useful.
"This publication underscores how many surprises nature still has for us," Shen said, "It can teach us much about fundamental chemistry and biology and provide us with the tools and inspiration we need to translate laboratory findings into medicines that impact society and address many problems faced by humanity."
| Biology |
Wild female chimpanzees in Uganda live well past the point when they can reproduce and probably go through menopause similar to humans, a new study has found. The finding raises fresh questions about why humans experience menopause.
Until now, humans were one of only three animal species known to go through menopause — along with orcas (Orcinus orca) and short-finned pilot whales (Globicephala macrorhynchus). Humans were thought to be the only primates that don't stay fertile for their entire lives.
"How this life history evolved in humans is a fascinating yet challenging puzzle," study lead author Brian Wood, an associate professor and evolutionary anthropologist at the University of California Los Angeles, said in a statement.
That's because the inability to reproduce past a certain age has no obvious evolutionary advantage. To explain it, researchers have previously posited that postmenopausal people may play an important role in caring for their children's children and boosting their survival chances, helping to ensure that their genes will be passed on — an idea known as the "grandmother hypothesis."
To find out whether menopause occurs in other primates, the authors of a study published Thursday (Oct. 26) in the journal Science investigated the fertility of some of our closest living relatives — eastern chimpanzees (Pan troglodytes schweinfurthii).
Wood and his colleagues pored over 21 years' worth of demographic and fertility data collected between 1995 and 2016 in Uganda's Kibale National Park, where the Ngogo community of wild chimpanzees lives. The researchers analyzed records for 185 female chimpanzees. They found a decline in fertility from the age of 30 onward and no births after 50, despite several females living long past that point.
It turns out that Ngogo female chimps spend one-fifth of their adult lives in a "post-reproductive state" — around half the proportion calculated for modern human hunter-gatherers, such as the Hadza people. Urine samples taken from 66 female chimpanzees in different reproductive phases (ages 14 to 67) also revealed hormonal changes as they got older and stopped having babies — similar to those seen in humans going through menopause.
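The "one-fifth of adult life" figure comes from a standard life-table quantity often called post-reproductive representation: the share of expected adult years that are lived after the age of last reproduction. Below is a minimal sketch of that calculation; the survivorship curve and the age cut-offs (adulthood at roughly 14, last birth by 50) are illustrative assumptions based on the ages mentioned in this article, not the study's actual demographic data.

```python
# Minimal sketch of post-reproductive representation (PrR): the fraction of
# adult person-years lived after the age of last reproduction. The survivorship
# curve and age cut-offs below are illustrative assumptions, not study data.
def post_reproductive_representation(survivorship, adult_age=14, last_repro_age=50):
    def person_years_above(age):
        # approximate person-years lived at or above `age` by summing survivorship
        return sum(l for a, l in survivorship.items() if a >= age)
    return person_years_above(last_repro_age) / person_years_above(adult_age)

# toy survivorship curve: fraction of a birth cohort still alive at each age
toy_survivorship = {age: max(0.0, 1.0 - 0.015 * age) for age in range(68)}
print(round(post_reproductive_representation(toy_survivorship), 3))
```

Applied to real life tables, a ratio of about 0.2 for the Ngogo chimpanzees versus roughly twice that for human hunter-gatherers is the comparison the article describes.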
But unlike many humans, female chimpanzees don't stay with their original tribe to reproduce and instead disperse to other groups, leaving behind their aging mothers. The grandmother hypothesis therefore has no legs to stand on in chimpanzees.
Instead, "the results show that under certain ecological conditions, menopause and post-fertile survival can emerge within a social system that's quite unlike our own and includes no grandparental support," Wood said.
While the new finding in chimps doesn't rule out the grandmother hypothesis applying to humans, it raises questions about menopause's origins in our species.
Chimpanzees and humans may have inherited the genes underlying menopause from a common ancestor, according to the study. Alternatively, the trait may have evolved independently in each species.
If that's the case, the new study provides "a solid basis for considering the roles that improved diets and lowered risks of predation would have played" in the evolution of menopause in humans, Wood said.
That's because chimpanzees in Kibale National Park have never had it so good. Hunters wiped out their only predators, leopards, in the 1960s, and humans no longer kill the chimps either. Ngogo chimpanzees also have plenty of fruit and eat more meat than neighboring chimp communities, the researchers wrote in the study.
This good life might explain why female chimpanzees there live long past their fertile years. Although non-reproductive females exist in other wild chimpanzee communities, only a few have lived beyond the age of 50, according to the study.
It's unclear whether the signs of menopause detected in chimps arise solely from "unusually favorable ecological conditions" or the apes evolved that way. Recent environmental changes and disease epidemics shortening their life spans may have, until now, erased the evidence of an evolutionary history that includes menopause, according to the study.
Sascha is a U.K.-based trainee staff writer at Live Science. She holds a bachelor’s degree in biology from the University of Southampton in England and a master’s degree in science communication from Imperial College London. Her work has appeared in The Guardian and the health website Zoe. Besides writing, she enjoys playing tennis, bread-making and browsing second-hand shops for hidden gems. | Biology |
Humans are hoping to colonize Mars in the near future, with NASA aiming to reach the Red Planet by 2040. But what will the long-haul space missions needed to get there do to the human body?
Our species evolved to thrive on the Earth, within its protective atmosphere and gravitational pull, not to survive in the unique cosmic environments beyond our planet. Some scientists have even suggested that visiting other planets may require humanity to tweak its DNA to boost our resilience against the dangers of spaceflight.
Many aspects of space exploration are detrimental to human health. One of the biggest obstacles to long-term spaceflight is microgravity, the state of near-complete weightlessness in which astronauts float and can push heavy objects through the air with ease. Another concern is cosmic radiation, or high-energy particles that zoom through space at nearly the speed of light. Not to mention the many risks that can stem from living in prolonged isolation and in the tight confines of a spacecraft.
Here, we list 10 ways the body changes in space — usually, for the worse.
1. Muscle loss
Weight-bearing movement is essential to growing and maintaining muscles. In a weightless environment, muscles get too little stimulus and begin to rapidly weaken and deteriorate. Astronauts can lose up to 20% of their muscle mass while spending as little as five days in microgravity, according to NASA.
Muscle loss in space occurs primarily in body parts responsible for walking and posture support, such as lower limbs and the trunk. Studies suggest this phenomenon is a direct result of muscle cells making fewer proteins, rather than a degradation of existing muscle fibers, according to a 2021 review published in the journal npj Microgravity.
2. Bone loss
The human skeleton also relies on weight-bearing exercises to maintain its mass and density. Astronauts can suffer decades' worth of bone loss after spending six or more months in space, which makes them more prone to bone fractures and osteoporosis.
Interestingly, microgravity's effects on specific bones may depend on their location in the body. Bones in the lower limbs and the lumbar spine may lose up to 1% of mass per month a person spends in space, while the density of the skull bones can actually increase, according to a 2020 meta-analysis published in the journal npj Microgravity. In space, there is no force pulling the body and its internal fluids down towards the Earth, which in turn may affect the distribution of factors that control the formation of bone tissue, the meta-analysis authors noted.
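As a rough illustration of what a 1% monthly decline implies over typical mission lengths (a simple compounding calculation for context, not a figure taken from the cited meta-analysis):

```python
# Illustrative compounding of a 1% monthly loss in bone mineral density.
# The rate and mission durations are assumptions for demonstration only.
monthly_loss = 0.01
for months in (6, 12):
    remaining_fraction = (1 - monthly_loss) ** months
    print(f"after {months} months: ~{(1 - remaining_fraction) * 100:.1f}% of initial density lost")
# prints roughly 5.9% at 6 months and 11.4% at 12 months
```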
As bone tissue rapidly degrades in space, it can release a flood of minerals into the blood, elevating the risk of hypercalcemia (excessive levels of calcium), which in turn can cause kidney stones, according to a 1995 review published in the journal Acta Astronautica.
3. Vision problems
Eyes are undoubtedly some of the most delicate and complex organs in the human body, so it comes as no surprise that going into space can have a damaging effect on our eyes and sense of vision. For example, the nerves that extend from the back of the eye may change in microgravity and then warp upon being returned to Earth gravity.
Vision is also affected by several factors including Earth's gravity. Gravitational forces help keep the eyeballs in their correct positions and allow them to swivel in the eye sockets, according to a 2009 review published in the journal Annals of the New York Academy of Sciences. In microgravity, these eye movements may be disrupted, according to a 2006 study published in the journal Human Physiology. Researchers examined astronauts who took part in long-haul missions on the International Space Station, before and after their flights. They found that long periods in microgravity lead to a significant change in the accuracy and speed of eye rotations, which in turn may impair the astronauts' ability to visually track objects, the study authors wrote.
Prolonged exposure to microgravity can also lead to a degenerative condition called Spaceflight Associated Neuro-ocular Syndrome (SANS), the symptoms of which include flattening of the eyeball, white lesions on the eye's innermost layer known as "cotton wool spots," and other tissue damage to various parts of the eye.
4. Back pain
Astronauts often complain of back pain after returning home from long-haul space flights. The culprit driving this pain is microgravity and its profound effect on the human spine.
Earth's gravity keeps the spinal column compressed and in its typical, slightly curved shape. In microgravity, the spinal column elongates and somewhat straightens. In fact, astronauts can "grow" up to three inches (7.6 centimeters) in a weightless environment, according to NASA.
The human spine is flexible, so short space missions are unlikely to cause lasting damage. However, prolonged stints in microgravity may weaken the muscles that support the vertebrae. In addition, weightlessness may lead to the degeneration of the intervertebral discs, the shock-absorbing cushions located between vertebrae, according to a 2023 review published in the journal Frontiers in Physiology.
Intervertebral disc degeneration in space appears to be caused by water loss. Under normal gravity conditions, the spine is compressed, which causes the discs to expel water throughout the day. During sleep, in a horizontal position, the gravity load is lost and the discs can rehydrate. This turnover allows the disc to maintain optimal levels of hydration and thus preserve its structure and functionality. In microgravity, however, this daily fluctuation is lost, the review authors wrote.
5. Lower immunity
The cosmic radiation, microgravity and overall physical and mental stress involved in space travel can weaken astronauts' immune systems and thus make them more susceptible to infections and systemic diseases.
Prolonged exposure to microgravity can reduce the number and function of macrophages, a type of white blood cell that kills harmful microbes and regulates the action of other immune system cells, according to a 2021 review published in the journal npj Microgravity. Weightlessness has a profound impact on macrophage metabolism, growth and reproduction, as well as the modes of communication between macrophages and the rest of the body's immune system, the review authors wrote.
Moreover, mounting evidence hints that a weightless environment may cause various species of microbes to cause more severe disease and become resistant to treatment, although this has mostly been shown in lab dish studies, according to a 2021 review published in the journal Life.
6. Increased risk of blood clots
Just like any other muscle, the heart relies on the continuous tug of Earth's gravity to stay strong and functional. Gravity pulls the blood in the body down towards the planet's center, forcing the heart to contract strongly enough to propel the blood upwards through the body. Microgravity removes this force, which may lead to astronauts' hearts becoming smaller over time.
But a shrinking heart is not the only potential effect of long-haul space missions on the human cardiovascular system: Evidence is growing that microgravity may also increase the risk of dangerous blood clots.
Studies suggest that this risk may arise because microgravity is linked to reduced blood flow across the whole body and increased presence of blood clotting factors. In addition, a weightless environment may cause dysfunction in the tissues lining blood vessels, which would theoretically contribute to the risk of blood clots during spaceflight, according to a 2021 review published in the journal Experimental Physiology.
7. Increased levels of inflammation
Long-haul space missions may increase the overall levels of inflammation in the body, according to the NASA Twins Study, and such elevated inflammation has been tied to conditions like heart disease and insulin resistance. Astronauts Scott and Mark Kelly are identical twin brothers. At one point, Scott was sent on a one-year space mission while Mark remained on Earth, and scientists seized this unique opportunity to compare how their bodies reacted to the vastly different environments.
Among many other tests, researchers compared the brothers' levels of cytokines, proteins in the blood that indicate inflammatory responses. They found that Scott's body was more prone to inflammation in microgravity than Mark's was on Earth. Moreover, one type of cytokine in Scott's blood remained elevated for almost six months upon returning home from space. The team also saw signs of atherosclerosis (artery narrowing due to plaque buildup) in Scott that did not appear in Mark and noted that this narrowing might have been linked to the observed inflammation.
8. DNA damage
Astronauts face an increased risk of DNA damage, mainly due to the exposure to cosmic radiation and microgravity, according to a 2017 review published in the journal npj Microgravity. The charged particles of cosmic rays can damage DNA strands directly or indirectly through the production of free radicals, a type of unstable molecule. Microgravity, on the other hand, can disrupt natural DNA repair processes, further increasing the risk of genetic mutations, the review authors wrote.
Unique conditions onboard a spaceflight, such as frequent contact with toxic chemicals (for example, dust particles covering the surface of celestial objects or certain components of a spacecraft) and lack of fresh air may also add to this harmful effect. As such, long-haul space missions may lead to an accumulation of genetic mutations, increasing the risk of cancer, cystic fibrosis, sickle cell anemia and other adverse health effects, the review authors noted.
9. Poor gut health
The human gastrointestinal tract is home to trillions of microbes that can influence people's digestive function, immune responses, metabolism and nerve signaling, among other bodily functions. The gut microbiome continuously changes in response to external factors, such as one's diet and psychological stress levels, and spaceflight may also affect gut health, according to a 2021 review published in the journal Life.
Astronauts tend to have a less diverse population of gut microbes compared to people on Earth, and often host a higher abundance of bacterial species that promote intestinal inflammation, such as Faecalibacterium and Parasutterella, according to the review. Scott Kelly of the NASA Twins Study also showed profound changes in his gut microbiome during spaceflight, but his gut returned to normal on Earth.
In addition, a 2023 mouse study published in the journal Cell Reports demonstrated that spaceflight-induced changes in the gut microbiome may speed the rate of bone loss in microgravity. However, more research is needed to understand whether and how this mechanism works in humans.
10. Changes in the brain's structure and activity
Long-haul space missions may "rewire" the brains of astronauts. The driving force behind this effect is likely microgravity.
Weightlessness causes the cerebrospinal fluid — a watery substance that cushions and provides nutrients to the brain and spinal cord — to shift around. This in turn can alter the shape and weight of the brain's white and gray matter. Changes in the brain's structure and activity may still be present several months after astronauts land back on Earth. At the same time, scientists are unsure exactly how detrimental these alterations might be to human health.
In addition, long-haul space missions can change how different parts of the brain communicate with each other, according to a 2023 study published in the journal Communications Biology.
Researchers collected brain scans from 13 astronauts before spaceflight, shortly after they returned home, and then again eight months later, and they found that these connectivity changes may persist in astronauts long after they return to Earth. Some connectivity changes can be seen in motor areas of the brain, which control movement and likely change to adapt to the challenges of weightlessness.
Anna Gora is a health writer at Live Science, having previously worked across Coach, Fit&Well, T3, TechRadar and Tom's Guide. She is a certified personal trainer, nutritionist and health coach with nearly 10 years of professional experience. Anna holds a Bachelor's degree in Nutrition from the Warsaw University of Life Sciences, a Master’s degree in Nutrition, Physical Activity & Public Health from the University of Bristol, as well as various health coaching certificates. She is passionate about empowering people to live a healthy lifestyle and promoting the benefits of a plant-based diet. | Biology |
Researchers shed further light onto zinc homeostasis in cells
A research group has unearthed how zinc transporter complexes regulate zinc ion (Zn2+) concentrations in different areas of the Golgi apparatus and revealed that this mechanism finely tunes the chaperone protein ERp44.
The findings, which were reported in the journal Nature Communications on May 9, 2023, reveal the crucial chemical and cellular biological mechanism at play behind zinc homeostasis, something necessary for avoiding fatal diseases such as diabetes, cancers, growth failures, and immunodeficiency.
As a trace element, zinc is essential for our health. Zn2+ ions are vital for enzyme catalysis, protein folding, DNA binding, and the regulation of gene expression, with nearly 10% of the human proteome binding Zn2+ for structural maturation and function.
Secretory proteins like hormones, immunoglobulins, and blood clotting factors are synthesized and folded in the endoplasmic reticulum (ER), a complex membrane network of tubules. Subsequently, they are transported to and matured in the Golgi apparatus, the organelle composed of multiple flattened sacs called cisternae, which sorts and processes proteins before directing them to a specific destination. Chaperone proteins are vital for maintaining protein homeostasis and preventing the formation of misfolded or aggregated proteins in these organelles.
The group's previous research demonstrated that Zn2+ in the Golgi apparatus plays an essential part in protein quality control in the early secretory pathway comprising the ER and Golgi. This system is mediated by the ER-Golgi cycling chaperone protein ERp44.
In the Golgi apparatus, there exist three ZnT complexes: ZnT4, ZnT5/6, and ZnT7. Yet, until now, how Zn2+ homeostasis is maintained in the Golgi apparatus had remained unclear.
"Using chemical biology and cell biology approaches together, we revealed that these ZnT complexes regulate the Zn2+ concentrations in the different Golgi compartments, namely cis, medial, and trans-Golgi cisternae," says Kenji Inaba, a corresponding author of the study and professor at Tohoku University's Institute of Multidisciplinary Research for Advanced Materials Sciences. "We also further elucidated the intracellular transport, localization, and function of ERp44 controlled by ZnT complexes."
ERp44 captures immature secretory proteins at the Golgi apparatus to prevent their abnormal secretion. Previous studies have shown that mice in which ERp44 expression is suppressed suffer from heart failure and hypotension.
Additionally, many secretory zinc enzymes are related to various diseases, including metastasis of cancer cells and hypophosphatasia. These enzymes depend on the Golgi-resident ZnT complexes to acquire Zn2+ for enzymatic activity. Male mice in which ZnT5 is suppressed have died of arrhythmias, suggesting that Zn2+ homeostasis may also be relevant to cardiovascular disease.
"Our findings will help us understand the mechanism by which disruptions of Zn2+ homeostasis in the early secretory pathway leads to the development of pathological conditions," adds Inaba.
The group is hopeful that the strategies employed in the study can paint a bigger picture of the mechanisms underlying the maintenance of intracellular Zn2+ homeostasis, and recommends future studies that measure Zn2+ in other organelles such as the mitochondria and nucleus.
More information: Yuta Amagai et al, Zinc homeostasis governed by Golgi-resident ZnT family members regulates ERp44-mediated proteostasis at the ER-Golgi interface, Nature Communications (2023). DOI: 10.1038/s41467-023-38397-6
Journal information: Nature Communications
Provided by Tohoku University | Biology |
Computer-aided detection: A patient treated with SRS to three prospectively identified metastases (PIMs, lower panel) also harboured three retrospectively identified metastases (RIMs, upper panel, white arrows). These RIMs were revealed in an MR scan seven weeks later when they were larger, prompting treatment with a second course of SRS. All PIMs and RIMs were correctly detected and segmented by CAD in the original MRI (cyan contours), with zero false positives. (Courtesy: Devon Godfrey, Duke University)
Researchers at Duke University Medical Center have developed a deep-learning-based computer-aided detection (CAD) system to identify difficult-to-detect brain metastases on MR images. The algorithm exhibited excellent sensitivity and specificity, outperforming other CAD systems in development. The tool shows potential to enable earlier identification of emerging brain metastases, allowing them to be targeted with stereotactic radiosurgery (SRS) when they first appear and, for some patients, reducing the number of required treatments.
SRS, which uses precisely focused photon beams to deliver a high dose of radiation to targets in the brain in a single radiotherapy session, is evolving into the standard-of-care treatment for patients with a limited number of brain metastases. To target a metastasis, however, it must first be identified on an MR image. Unfortunately, approximately 10% of brain metastases are missed (rising to about 30% for those less than 3 mm in size), even when the images are reviewed by expert neuroradiologists.
When these undiscovered brain metastases – which the researchers refer to as retrospectively identified metastases (RIMs) – are identified on subsequent MRI scans, a second SRS treatment is usually needed. Such treatment is expensive, and can be uncomfortable and invasive, sometimes requiring head immobilization with a frame secured to the skull by pins.
At the recent ASTRO Annual Meeting, Devon Godfrey explained that the researchers designed the convolutional neural network (CNN)-based CAD system specifically to improve the detection and segmentation of hard-to-detect RIMs and very small prospectively identified metastases (PIMs). Godfrey and colleagues describe the testing and validation of this system in the International Journal of Radiation Oncology Biology Physics.
The team trained the CAD tool on MRI data (a contrast-enhanced spoiled gradient echo sequence) from 135 patients with 563 brain metastases. The images were acquired using 1.5 T and 3.0 T MRI scanners from different vendors at multiple Duke Health locations. In total, the data set included 491 PIMs with a median diameter of 6.7 mm, and 72 RIMs from 32 patients, with a median diameter of 2.7 mm.
To identify RIMs, the researchers reviewed each patient’s original MR images to search for signs of contrast enhancement in the exact location where a metastasis was later detected. After review, they classified each RIM as either having met imaging-based diagnostic criteria (+DC) or having insufficient visual information (-DC) to be identified as a metastasis.
The researchers randomized the data set of RIMs and PIMs into five groups, using four of these for model and algorithm development and one as a test group. “The inclusion of both +DC and -DC RIMs resulted in the highest sensitivities for every brain metastasis category and size, while also returning the lowest false-positive rate and the highest positive predictive value,” they report. “This shows a clear benefit of including an overweighted sampling of small challenging brain metastases to CAD training data.”
For PIMs and +DC RIMs – which have clear characteristics of metastases on MRI – the model achieved an overall sensitivity of 93%, ranging from 100% for lesions larger than 6 mm in diameter to 79% for those smaller than 3 mm. The false-positive rate was also impressively low, with a mean of 2.7 per person, compared with between eight and 35 in other CAD systems with comparable detection sensitivity for small lesions.
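To connect the numbers quoted above, detection performance in studies like this is typically summarized per lesion (sensitivity, positive predictive value) and per patient (false positives per scan). The sketch below shows how those quantities relate; the counts are hypothetical placeholders, not values from the Duke study.

```python
# Minimal sketch of lesion-level detection metrics and a per-patient
# false-positive rate. All counts are hypothetical, not data from the study.
def detection_metrics(true_positives, false_negatives, false_positives, n_patients):
    sensitivity = true_positives / (true_positives + false_negatives)   # detected / all real lesions
    ppv = true_positives / (true_positives + false_positives)           # detected / all detections
    fp_per_patient = false_positives / n_patients
    return sensitivity, ppv, fp_per_patient

sens, ppv, fp_rate = detection_metrics(
    true_positives=88, false_negatives=12, false_positives=60, n_patients=25
)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}, false positives per patient={fp_rate:.1f}")
```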
The CAD system was also able to detect some of the -DC RIMs in both the development and test sets. Identification of brain metastases at this earliest stage would be a great clinical advantage, as such lesions could then be more thoroughly monitored with imaging, prompting treatment if required.
The Duke team is now working to improve the CAD tool’s accuracy by utilizing multiple MR sequences. Godfrey explains that brain MRI studies almost always include multiple MR sequences that produce unique information about every voxel in the brain. “We believe that incorporating the additional information available from these other sequences ought to improve its accuracy,” he says.
Godfrey notes that the researchers are just weeks away from launching a simulated prospective clinical use study of the existing CAD system to investigate how the tool impacts clinical decision making by both radiologists and radiation oncologists. "Multiple expert neuroradiologists and neuro-radiation oncologists who perform SRS will be presented with brain MR scans. They will be asked to find any lesion that might be a brain metastasis, rate their confidence level that it is, and state whether they would treat the lesion with SRS, based upon its appearance in the images," he tells Physics World. "We will then present them with the CAD predictions and evaluate the impact of CAD on each physician's clinical decisions."
If this simulation study yields promising results, Godfrey anticipates deploying the CAD tool to help identify challenging brain metastases prospectively in new patients being treated in the Duke Radiation Oncology clinic under a research protocol, perhaps as soon as mid-year 2023. | Biology |
Scientists have discovered the deepest known evidence of coral reef bleaching, more than 90 metres below the surface of the Indian Ocean.
The damage – attributed to a 30% rise in sea temperatures caused by the Indian Ocean dipole – harmed up to 80% of the reefs in certain parts of the seabed, at depths previously thought to be resilient to ocean warming.
However, scientists say it serves as a stark warning of the harm caused in our ocean by rising ocean temperatures, and also of the hidden damage being caused throughout the natural world as a result of climate change.
The findings, highlighted in a study published in Nature Communications, were discovered by researchers from the University of Plymouth.
Dr Phil Hosegood, Associate Professor in Physical Oceanography at the University of Plymouth and lead on the project, said: “There are no two ways about it, this is a huge surprise. Deeper corals had always been thought of as being resilient to ocean warming, because the waters they inhabit are cooler than at the surface and were believed to remain relatively stable. However, that is clearly not the case and – as a result – there are likely to be reefs at similar depths all over the world that are at threat from similar climatic changes.”
Researchers from the University have been studying the Central Indian Ocean for well over a decade, with their work supported by the Garfield Weston Foundation and the Bertarelli Foundation.
On their research cruises, they use a combination of in situ monitoring, underwater robots and satellite-generated oceanographic data to understand more about the region’s unique oceanography and the life it supports.
The first evidence of the coral damage was observed during a research cruise in November 2019, during which scientists were using remotely operated underwater vehicles equipped with cameras to monitor the coral health below the ocean surface.
Images from the underwater cameras were being transmitted live onto the research vessel, and gave the research team its first glimpse of the corals that had been bleached. Conversely, at the same time as the deeper reefs were bleaching, they observed shallow water reefs exhibiting no sign of harm.
Over the subsequent months, the researchers assessed a range of other data collected during the research cruise and information from satellites monitoring the ocean conditions and temperatures.
It highlighted that while temperatures on the ocean surface had barely changed during the period, temperatures beneath the surface had climbed from 22°C to 29°C due to the thermocline deepening across the equatorial Indian Ocean.
Clara Diaz, the lead author on the study, said: “What we have recorded categorically demonstrates that this bleaching was caused by a deepening of the thermocline. This is down to the regional equivalent of an El Nino, and due to climate change these cycles of variability are becoming amplified. Moving forward, bleaching in the deeper ocean here and elsewhere will likely become more regular.”
Dr Nicola Foster, Lecturer in Marine Biology and study co-author, added: “Our results demonstrate the vulnerability of mesophotic coral ecosystems to thermal stress and provide new evidence of the impact that climate change is having on every part of our ocean. Increased bleaching of mesophotic corals will ultimately lead to coral mortality and a reduction in the structural complexity of these reefs. This will likely result in a loss of biodiversity and a reduction in the critical ecosystem services that these reefs provide to our planet.”
Researchers from the University returned to the same areas during planned cruises in 2020 and 2022, and found that large parts of the reef had recovered.
In spite of this, they say, it is critically important to increase monitoring of the seafloor in the deep ocean, even if it is a hugely challenging and complicated undertaking.
With damage to shallow water corals increasing in frequency and severity, it had been expected that mesophotic corals – found between 30 and 150 metres below the surface – would plug the gap in terms of delivering ecosystem benefits.
However, this research highlights that may not be the case – and with deep water corals all over the planet remaining largely understudied, similarly damaging incidences of bleaching could be going unnoticed.
Dr Hosegood added: “The oceanography of a region is impacted by naturally occurring cycles that are becoming amplified by climate change. Currently, the region is suffering similar, if not worse, impacts due to the combined influence of El Nino and the Indian Ocean Dipole. While there is no way we can stop the thermocline from deepening, what we can do is expand our understanding of the impacts that these changes will have throughout these environments of which we have so little knowledge. In the face of fast-paced global change, that has never been more urgent.”
Journal: Nature Communications
Method of Research: Observational study
Subject of Research: Not applicable
Article Title: Mesophotic coral bleaching associated with changes in thermocline depth
Article Publication Date: 16-Oct-2023 | Biology |
On the menu tonight, a nice, nutritional, bacteria-killing virus. Sounds unappealing? It may not be to your cells.
In a new study, scientists revealed that a type of bacteriophage — a virus that infects and kills bacteria — found in the human gut helps mammal cells grow and thrive in what could be a symbiotic relationship. That's a surprise, as other bacteriophages (phages for short) are known to trigger inflammatory responses when they encounter mammalian cells.
This phenomenon, described Thursday (Oct. 26) in the journal PLOS Biology, was only demonstrated in cells in the lab. However, the authors hope the findings will aid future research that could impact human health, such as supplementing studies investigating phage therapy to treat infections with antibiotic-resistant superbugs.
"[The study] opens up a new area of symbiosis and symbiotic interactions between phages and mammalian cells," senior study author Jeremy Barr, an associate professor of biological sciences at Monash University in Australia, told Live Science. "I think this study suggests that there may be a lot more that we're unaware of."
Phages are the most abundant biological entities on the planet. They're extremely small, with most ranging in size from around 24 to 200 nanometers; to put that in perspective, a penny is about 19 million nanometers long. They're made up of a DNA or RNA genome surrounded by a protein shell. Although interactions between phages and bacteria are relatively well studied, the same can't be said for those between the former and mammalian cells.
In the study, the authors looked at a well-known phage species called T4 that normally infects Escherichia coli bacteria. They applied T4 to three types of mammalian cells in the lab: an immune cell called a macrophage that had been extracted from mouse tissue; and human lung and dog kidney cells derived from cancer cell lines.
The T4 phages didn't activate DNA-mediated inflammatory processes in the cells. Instead they triggered signaling pathways that promote cell growth and survival, resulting in increased cellular metabolism and the reorganization of actin, a protein that is found in the fluid-filled space inside mammalian cells. Actin reorganization is needed for cells to uptake material via macropinocytosis, a phenomenon also known as "cell drinking."
The broader health impacts of the study are still unknown, Barr said. The authors also looked at only one phage species, while estimates suggest there are as many as 10^15 phages in the gut. In addition, the results may be a side effect of using immortalized cancer cell lines, which are already more likely to grow and proliferate, he said.
Nevertheless, the find should spur follow-up research. Phage therapy is generally considered to be safe, although it's still early in the clinical trial process, and the current study now suggests that there's "many, many other potential impacts" that phages may have on human cells, Barr said.
Another avenue where the research could be applied is in the gut microbiome.
"There's some really interesting research showing that there's certain gut communities associated with inflammatory disorders — IBD [inflammatory bowel disease], Crohn's disease — that have virus signatures associated with them," Barr said.
"This is very much conjecture and extrapolation but it's interesting to think that maybe phages do play a role in this and there may be some inflammatory interactions, and maybe some also beneficial interactions in a more sort of homeostasis gut microbiome system," he said.
Emily is a health news writer based in London, United Kingdom. She holds a bachelor's degree in biology from Durham University and a master's degree in clinical and therapeutic neuroscience from Oxford University. She has worked in science communication, medical writing and as a local news reporter while undertaking journalism training. In 2018, she was named one of MHP Communications' 30 journalists to watch under 30. ([email protected]) | Biology |
Karen Lips has never forgotten the silence. It was the early 1990s; she was finishing her PhD in tropical biology, and had come back to a research site in Costa Rica, a protected reserve high up in the mountains, after a short break. On her previous visit, the air had been full of the sounds of the frog species she was studying. Now, inexplicably, almost all the frogs were gone.

She was mystified and alarmed, but she arranged to move her research sites further south in Central America, into the mountains of Panama and eventually as far south as the border with Colombia. Wherever she and her colleagues went, though, they found a wave of death preceding them. "By the time we got there," she recalls, "it was already too late."

What Lips was seeing as a graduate student—she is now a tropical ecologist and a professor of biology at the University of Maryland College Park—was the arrival in the Americas of a fungal pandemic that had been sweeping the globe. Batrachochytrium dendrobatidis, a virulent spore-forming pathogen generally known as Bd, originated in Asia and probably spread for decades before its damage began to be noticed in the 1980s. Since then, scientists estimate, 90 amphibian species have been made extinct by the fungus, and more than 400 were severely harmed by it, losing up to 90 percent of their populations. Altogether, more than 6 percent of all amphibian species in the world were decimated or destroyed, a catastrophe that one research group has called "the greatest loss of biodiversity attributable to a disease."

Over the years, Lips and other scientists were able to document what happened to the ecosystems that lost those frogs and other amphibian species: surges in populations of insects (which the frogs would have eaten) and drops in populations of snakes (which would have eaten the frogs). But what looked to ecologists like profound environmental disruption was invisible to most of society, because it occurred far from human habitation, in locations where surveillance is patchy and expensive.

Now, though, there's evidence that the damage caused by Bd has rippled into the human world.

In the journal Environmental Research Letters, Lips and several other researchers report that the devastation of frog species in Costa Rica and Panama caused an unforeseen surge in human malaria cases lasting eight years after the pathogen arrived—likely because, with no tadpoles to eat their larvae, mosquito populations boomed. It's the first published evidence that the worldwide amphibian die-off has had implications for people.

"This paper is a wake-up call," says John Vandermeer, a professor of ecology and evolutionary biology at the University of Michigan, who was not involved in the study. "It makes the point that the problem is not just that we're losing biodiversity, and biodiversity is wonderful and pretty and beautiful. It's that the loss of biodiversity does have secondary consequences on human welfare—in this particular case, human health."

Though Bd swept through Central America from the 1980s to the 2000s, the analysis that demonstrated its effect on human health could be accomplished only recently, says Michael Springborn, the paper's lead author and a professor and environmental and resource economist at UC Davis. "The data existed, but it wasn't easily obtainable," he says.
Over the years, though, county-level disease records were digitized at the ministries of health in Costa Rica and Panama, providing an opportunity to combine that epidemiology in a particular statistical model with satellite images and ecological surveys revealing land characteristics and precipitation, as well as with data on amphibian declines.

"We always thought if we could link [the die-off] to people, more people would care," Lips says. "We were pretty sure we could quantify changes in bugs, or frogs, or the water quality, or fish or crabs or shrimp. But making that connection to people was so difficult, because the effect was so diffuse, and it happened across such a large area."

But precisely because Bd swept through Central America in a specific pattern, from northwest to southeast—"a wave that hit county after county over time," Springborn says—it created a natural experiment that allowed the researchers to look granularly at Costa Rica and Panama before and after the fungal wave arrived. In the health records, they could distinguish that malaria rates were flat in counties (called cantons or distritos) before the Bd fungus tore through, then began to rise afterward. At the peak of the disease surge, six years from the arrival of Bd in an area, malaria cases rose five-fold.

And then they began to fall off again, beginning about eight years after the lethal fungus arrived. Researchers aren't sure why, because most amphibian populations haven't bounced back from the fungal onslaught. Though some populations appear to be developing resistance, most have not recovered their density or diversity. Since the fungus lingers in the environment, they remain at risk.

There's a missing piece in the researchers' analysis, which is that there is no contemporaneous data to prove that mosquito populations surged in a way that promoted malaria. The surveys they needed—of mosquito density during and after Bd's arrival, in the 81 counties in Costa Rica and 55 in Panama—simply don't exist. That makes it difficult for them to determine why malaria fell off again, particularly since frog populations haven't revived. Springborn theorizes it might be due to human intervention, like governments or organizations noticing the malaria spike and spraying insecticides or distributing bed nets. Or it might be that ecosystems recovered even though the frogs did not, with other predator species taking advantage of the emptied niche to keep mosquito counts down.

But the fact that malaria rates came back down again doesn't invalidate the findings' importance. "For the most part, Bd has been a story of the consequences for amphibians, basically: Isn't it too bad to lose this charismatic group of organisms?" says James P. Collins, an evolutionary ecologist and professor at Arizona State University. (Collins has some connection to this research; he oversaw a grant that the National Science Foundation made to Lips in the 1990s.) "It's been an embedded assumption that reducing the world's biodiversity is bound to be harmful. Connecting the dots to real implications for humans is a nice piece of evidence for understanding the consequences."

It's important to have that evidence, because a second fungal wave is coming: a related pathogen called Batrachochytrium salamandrivorans, Bsal for short, that is lethal to salamanders and newts.
Like the frogs and related amphibians decimated by Bd, salamanders are crucial members of wild ecosystems—and it happens that North America harbors about 50 percent of the world's salamander species, making them pillars of biodiversity for US woodlands and wildlife.

It has been shown over the years that Bd's emergence was not due solely to natural forces. Instead, its spread across the world was turbocharged by international trade, as wild amphibians hitchhiked in cargo, and captured wildlife were sold legally or illegally into the enormous global market for exotic pets. International conventions on trading some species of wildlife were instituted in the 2000s in hopes of controlling the fungus, but genomic analyses published a decade later showed new strains circulating the world, indicating that fresh imports were occurring despite the bans.

That's relevant to the future of Bsal as well. To slow down that second fungus, the US Fish and Wildlife Service made it illegal in 2016 to import 201 species of salamanders. But experts have argued for years that federal resources for intercepting and inspecting even legally traded animals are inadequate.

The new evidence that Bd imperiled human health, combined with years of proof that it destroyed wildlife globally, might be enough to prompt further regulations to slow Bsal's advance while it might yet be controlled. At the least, it serves as a warning of how difficult it can be to predict pandemics ahead of time—and how hard it can be to put the brakes on once they're underway. | Biology |
- Compounds in tea and berries reduced plaques strongly linked to Alzheimer's
- Alzheimer's is the most common cause of the degenerative disease dementia
- The findings apply to more than 6 million Americans living with Alzheimer's

Published: 14:41 EDT, 2 November 2022 | Updated: 14:51 EDT, 2 November 2022

Green tea may stave off dementia, a study suggests. Chemicals found in the herbal drink, called catechins, reduced plaques strongly linked to Alzheimer's in a lab study. The compound resveratrol - found in blueberries, grapes and red wine - also had a similar effect on human brain cells. Catechins and resveratrol possess anti-inflammatory properties, which may explain their plaque-clearing abilities. Tufts University researchers reported their findings in the journal Free Radical Biology and Medicine.

Alzheimer's is the most common form of dementia and affects more than six million Americans. It is characterized by a lack of communication among neurons in the brain, resulting in loss of function and cell death.

Catechins are compounds in green tea that have antioxidant-like effects that help prevent cell damage and soothe inflammation in the brain. Tufts researchers considered these and 20 other compounds for their anti-Alzheimer's properties, including resveratrol, common in blueberries and grapes.

What is Alzheimer's?

Alzheimer's disease is a progressive, degenerative disease of the brain, in which a build-up of abnormal proteins causes nerve cells to die. This disrupts the transmitters that carry messages, and causes the brain to shrink. More than 5 million people suffer from the disease in the US, where it is the 6th leading cause of death, and more than 1 million Britons have it.

WHAT HAPPENS?

As brain cells die, the functions they provide are lost. That includes memory, orientation and the ability to think and reason. The progress of the disease is slow and gradual. On average, patients live five to seven years after diagnosis, but some may live for ten to 15 years.

EARLY SYMPTOMS:

- Loss of short-term memory
- Disorientation
- Behavioral changes
- Mood swings
- Difficulties dealing with money or making a phone call

LATER SYMPTOMS:

- Severe memory loss, forgetting close family members, familiar objects or places
- Becoming anxious and frustrated over inability to make sense of the world, leading to aggressive behavior
- Eventually losing the ability to walk
- May have problems eating
- The majority will eventually need 24-hour care

Source: Alzheimer's Association

In a brain afflicted with Alzheimer's, abnormal levels of certain naturally occurring proteins clump together to form plaques that collect between neurons and disrupt cell function. But catechins and resveratrol proved effective at reducing the formation of plaques in those neural cells. And they did so with few or no side effects. Some of the other compounds tested, including curcumin from turmeric, the diabetic medication metformin, and a compound called citicoline, also prevented plaques from forming.

They tested the efficacy of 21 compounds in a 3D neural tissue model made of a nonreactive silk sponge seeded with human skin cells that, through genetic reprogramming, were converted into self-renewing neural stem cells.

Dr Dana Cairns, a research associate in the Tufts School of Engineering and leader of the study, said: 'We got lucky that some of these showed some pretty strong efficacy.
'In the case of these compounds that passed the screening, they had virtually no plaques visible after about a week.'

The research team's findings that point to the anti-plaque properties of commonly found compounds have the potential to benefit millions and build upon years of research into their therapeutic benefits. Green tea and berries are rich in flavonoids, which can reduce cell-damaging free radicals, soothe inflammation in the brain and enhance brain blood flow.

The Tufts researchers' findings do not say conclusively that the neuroprotective properties of the 21 compounds studied will help beat back the progression of dementia. Some of the compounds studied, for instance, are not readily absorbed into the body or bloodstream. And some compounds were unable to permeate the blood-brain barrier, a barrier between the brain's blood vessels and the cells and other components that make up brain tissue. The purpose of the blood-brain barrier is to protect against circulating toxins or pathogens that could cause brain infections.

Further study into the ability of these compounds to better permeate the bloodstream and the blood-brain barrier is needed, according to Dr Cairns. But her team's findings are significant because there is currently no cure for Alzheimer's disease and treatments to slow the progression of the disease are limited.

Alzheimer's is not the only cause of dementia, which in the US affects over 7 million people. Other causes include Parkinson's disease and vascular dementia caused by conditions that block or reduce blood flow to various regions of the brain. | Biology |
Australian researchers have found a protein in the lungs that sticks to the Covid-19 virus like velcro and immobilises it, which may explain why some people never become sick with the virus while others suffer serious illness.
The research was led by Greg Neely, a professor of functional genomics with the University of Sydney’s Charles Perkins Centre in collaboration with Dr Lipin Loo, a postdoctoral researcher and Matthew Waller, a PhD student. Their findings were published in the journal PLOS Biology on Friday.
The team used human cells in tissue culture to search the whole human genome for proteins that can bind to Sars-CoV-2, the virus which causes Covid-19.
This was done using the genetic engineering tool known as Crispr, which allowed them to turn on all genes in the human genome, then look to see which of those genes give human cells the ability to bind to the Sars-CoV-2 spike protein. The spike protein is crucial to the virus’s ability to infect human cells.
“This let us find this new receptor protein, LRRC15,” Neely said.
“We then used lungs from patients that died of Covid or other illnesses and found the serious Covid patients had tons of this LRRC15 in their lungs.”
LRRC15 is not present in humans until Sars-CoV-2 enters the body. It appears to be part of a new immune barrier that helps protect from serious Covid-19 infection while activating the body’s antiviral response.
Although patients who died from Covid-19 did produce LRRC15, the researchers believe not enough was produced to be protective, or it was produced too late to help.
“When we look at lungs from patients that died of Covid there is much of this protein,” Neely said. “But we couldn’t look at the lungs of patients that survived Covid as lung biopsy is not something that is easy to do on live people. We predict there is more of this protein in survivors versus those that died of Covid.”
A separate study from London that examined blood samples for LRRC15 found the protein was lower in the blood of patients with severe Covid compared with patients who had mild Covid, supporting this theory.
“Our data suggests that higher levels of LRRC15 would result in people having less severe disease,” Neely said.
“The fact that there’s this natural immune receptor that we didn’t know about, that’s lining our lungs and blocks and controls virus – that’s crazy interesting.”
They also found that LRRC15 is expressed in fibroblast cells, the cells that control lung fibrosis, a disease that causes damaged and scarred lung tissue. Covid-19 can lead to lung fibrosis, and the finding may have implications for long Covid.
“We can now use this new receptor to design broad-acting drugs that can block viral infection or even suppress lung fibrosis,” Neely said. There are currently no good treatments for lung fibrosis, he said.
Loo said LRRC15 “acts a bit like molecular velcro, in that it sticks to the spike of the virus and then pulls it away from the target cell types”.
Prof Stuart Turville, a virologist with the Kirby Institute at the University of New South Wales, said the finding is “a powerful example” of what happens when teams work together in Australia.
“Greg Neely’s team is brilliant at what we call functional genomics,” Turville said.
“That is the ability to wake up or turn off thousands of proteins at a time and when looking at new viruses, this is really important. Our team provided the platforms and virus for testing in this setting and these collaborations are really powerful both now and also in the future for emerging pathogens.”
And while the discovery may take years to translate into drugs that can protect against viruses and other diseases, Turville said the research adds to our understanding of innate immunity – hard-wired responses humans have that can act as soon as a virus appears.
“Understanding these pathways is important as they enable us to put the brakes on a virus, so other arms of our immune system can catch up and respond,” Turville said.
“In some cases these brakes can be so effective, that the virus may never gain momentum. Indeed this could be one of many factors that may increase the ability of people to be protected from the virus early on.” | Biology |
A video showing a large swarm of sharks swimming off the coast of Galveston, Texas, has gone viral this month – because the sheer size of the group is astounding. The video of the sharks received millions of views on TikTok, raising questions about this behavior and how common it is.
David Wells, a professor of marine biology at Texas A&M in Galveston, told CBS News this type of swarming behavior is common for some species of sharks.
"This group of sharks is most likely blacktips and spinners — it's difficult to identify for sure," he said. "And yes, for those species, it is common that you find them in large groups."
"Sometimes you'll see them in large groups behind shrimp trawlers pulling their nets because there's such a large good source there with all the fish getting disturbed and coming out of the nets from the trawlers," he said, adding that this group was likely feeding on a bunch of fish when the video was taken.
"[This type of shark] is usually sub-surface, so the fact they were up on the surface in this video was a little bit interesting, but certainly not anything rare or unique," Wells said. "You see these aggregations throughout the world. There are big aggregations commonly seen off of Florida in the wintertime."
Wells also said because the Gulf of Mexico is probably warm right now, the sharks are likely staying put in one area. But once temperatures drop in the fall and winter, they'll probably disperse a bit more as they go on the hunt for prey.
Waters off the coast of Florida hit, and were as much as for the time of year.
Wells said swimmers and boaters shouldn't be alarmed by this video, since there haven't been any recent incidents of shark attacks off the coast of Texas. But there are many sharks in the Gulf – hammerhead sharks, mako sharks, tiger sharks and whale sharks are most commonly spotted off the coast, Wells said.
| Biology |
Scientists working in Greenland identified the oldest samples of DNA ever found on Earth. By analyzing the two-million-year-old genetic material, they've revealed how northern Greenland was once a wildly different environment than the cold, polar region it is today. Project researcher Eske Willerslev joined William Brangham to discuss the discovery.
Judy Woodruff: Scientists working in Greenland have identified the oldest samples of DNA ever found on Earth. By analyzing this two-million-year-old genetic material, they have revealed how Northern Greenland was once a wildly different environment than the cold, polar region it is today, one teeming with ancient wildlife and plants, including some that scientists thought had never lived so far north. William Brangham is back now to explore this with one of the researchers who made this discovery.
William Brangham: For more on this remarkable discovery, I'm joined by one of the lead scientists on this project. Professor Eske Willerslev is an evolutionary geneticist and one of the early pioneers in studying ancient DNA. He's director of the Center for GeoGenetics at the University of Copenhagen's GLOBE Institute. Professor Willerslev, so good to have you. And congratulations on this research. So, you discovered this DNA in Northern Greenland. Can you just tell us a little bit about how you actually found the DNA?
Eske Willerslev, University of Copenhagen: So, it's some settings, big hills of two-million-year-old dirt, basically, lying in Northern Greenland. And what we did is, we were digging into this dirt, and we were drilling out some dirt core. You can't see any biological material, like bones or anything like that. It's basically dirt, but the DNA from the past has stuck to this dirt. And this is because we're shedding DNA all the time while we're alive. And so did these animals and plants also two million years ago.
William Brangham: So, you're not drilling into an ancient carcass or an ancient tree. This is something that the animals or plants excreted during their lives?
Eske Willerslev: That's correct. So it's coming from skin cells. It's coming from ancient feces, from urine and stuff like that. If I touch the screen like this, right, my DNA will be on the screen, so we will basically — every person are shedding DNA to the surroundings, and some of this DNA will bind to these sediment particles and survive for two million years, basically.
William Brangham: What you just said there is so striking, though, because I had no idea that DNA could survive for such a long period of time. How is that possible?
Eske Willerslev: Well, I was surprised about that too. So, the oldest DNA until this study was one million years. And that's basically what most people believed was — is possible. But, apparently, I mean, when it binds to these mineral particles in the soil, it basically protects the DNA, so it can survive much longer.
William Brangham: So, once you have isolated the DNA and said, aha, this is ancient, ancient DNA, how do you go about then trying to figure out what it's DNA from … what these organisms were?
Eske Willerslev: Yes, that was a challenge too, because two million years is a long time in evolution, right? So whatever DNA we were finding are not identical to what we see today. But we can basically compare it to all known DNA sequences ever recorded from both the present, but also what people have retrieved from bones and teeth of the past, for example. And then we can basically identify these fragments, and, from these fragments, through the comparison, reconstruct, what animals and plants did they belong to?
William Brangham: And tell us a little bit about what you discovered.
Eske Willerslev: It's a total surprise. I mean, you have to understand that, today, this area up in Northern Greenland is what we called an arctic desert. There's almost nothing. It looks like Sahara, basically. And then what we can see, two million years ago, it was a diverse forest of all kinds of trees and also animals, like mastodon, these extinct big elephants, as well as the ancestor of reindeers. There was hares. There was lemmings. There was geese. I mean, so a very different ecosystem than what you see today.
William Brangham: And I understand as well you found some traces of horseshoe crabs as well.
Eske Willerslev: Yes. Yes.
William Brangham: I mean, again, I'm no — I'm no paleontologist, but I don't — it seems striking to think that you're finding mastodons in some proximity to horseshoe crabs.
Eske Willerslev: Yes, but this is because, if you had been there two million years ago, and you were standing at the shore with your rubber boot at the water, right, you would see basically a river — facing a river that is coming out, bringing material with it into, you can see, the bay, into the ocean. So, therefore, it's a mixture between the DNA from the terrestrial surroundings, right? You would have looked up, again, at this forest and seen the mastodon and so forth. And then you also get marine organisms, right, because the sediments fold into a marine setting. And that's why we see the horseshoe crab. And all of these animals suggest a time where it was way warmer than today, probably 11 to 12 degrees Celsius warmer than today.
William Brangham: Well, walk me through the implications of that. If these species existed in that warmer world, what does — what are the implications for modern-day man?
Eske Willerslev: Well, to me, there's two major implications. One is that what we see is an ecosystem with no modern analogue. There's nowhere in the world you find this ecosystem, which is a mixture between arctic organisms and temperate organisms. So, what it tells us is really that climate change, when it's getting warmer, it's actually quite unpredictable. I mean, most models, if not all models that are trying to predict how our surroundings, our biology will react to this moment probably wouldn't be able to have predicted this when you go back in time. So, you can say the plasticity of organisms are different than what we think. And the — well, this is, of course, worrisome, because if you're bad at forecasting, it means you also have — it's difficult to make a strategy how to mitigate, right, the consequences of global warming. On the other hand, I would say now we have a generic road map, right? We have a genetic — it's the building blocks of life. We have a genetic road map, where we can find out, how did these organisms back in time adapt to global warming?
William Brangham: I know that you have been studying ancient DNA for much of your career, but this does seem like a genuinely striking advancement in your own work. And I wonder how that personally resonates with you. When you realized what you had and what you discovered, what is that like for you?
Eske Willerslev: I mean, it's amazing, right? I mean, I — sometimes, I kind of divide our discoveries into what we call founding papers, and then you can see the papers where we just built on what we found, basically. And this is definitely one of the founding papers. I mean, it allows us to go back to — for the first time back to perform the last Ice Age, right, and to a climate which is very similar to what we are heading towards because of global warming. So it's also a very important period, because it tells us something about what we can expect to happen in the future.
William Brangham: Such a tremendous discovery here. Professor Eske Willerslev of the University of Copenhagen, thank you so much for talking with us.
Eske Willerslev: My pleasure.
Judy Woodruff: So fascinating. I am in awe of these scientists. | Biology |
New blood test could improve diagnosis & management of concussion
Researchers have found a way to determine whether someone has suffered a concussion by measuring the blood levels of three biomarkers within six hours of the injury. The blood test could be used alongside existing tests for a more accurate diagnosis of the condition and to aid management and recovery.
Accurately diagnosing concussion, also called mild traumatic brain injury (mTBI), can be tricky because the signs and symptoms associated with it can be subtle. There’s no obvious observable injury – such as with a broken bone or dislocation – and unless there’s been a bleed in the brain, brain imaging often cannot detect the condition.
Now, researchers led by Monash University have discovered that three proteins, specific biomarkers that occur following concussion, could be used as a global blood test for the condition.
“Concussion diagnosis is notoriously challenging in many cases because clinicians rely on subjective observations of physical signs and self-reported symptoms, neither of which are specific to concussion and often exhibit subtlety and rapid evolution,” said Stuart McDonald, corresponding author of the study. “Consequently, even in the ED [emergency department], individuals can be discharged without a definitive diagnosis. Our findings showed that the panel of biomarkers we assessed performed really well even in patients that lacked the more overt signs of concussion, such as loss of consciousness or post-traumatic amnesia.”
The researchers found that blood levels of three proteins – interleukin-6 (IL-6), glial fibrillary acidic protein (GFAP), and ubiquitin C-terminal hydrolase-L1 (UCH-L1) – each reflecting different aspects of the biology of brain trauma, could be used to precisely diagnose concussion in otherwise healthy patients under 50 who presented to the emergency department within six hours of injury.
There were 118 participants enrolled in the study, 74 with mTBI and 44 uninjured controls. Concussed patients self-reported the duration of memory loss immediately post-injury and were tested for intellectual functioning, memory performance and speed. Blood was taken to test plasma levels of IL-6, GFAP and UCH-L1 at under six hours post-injury and seven days post-injury.
The researchers then used software to determine the accuracy of each of the biomarkers.
They found that GFAP levels before six hours demonstrated good accuracy in distinguishing mTBI from controls and in participants with and without loss of consciousness or post-traumatic amnesia. At seven days, GFAP provided moderate/acceptable diagnostic utility. Under six hours, UCH-L1 had moderate/acceptable accuracy for distinguishing mTBI from controls and in participants with and without loss of consciousness or post-traumatic amnesia. At seven days, the biomarker provided moderate/acceptable utility. IL-6 was shown to have excellent utility for distinguishing mTBI from controls at under six hours and in patients with and without loss of consciousness or post-traumatic amnesia. By seven days, the diagnostic utility was restricted to female participants and those with loss of consciousness/post-traumatic amnesia.
The researchers found that when IL-6, an inflammatory biomarker, was measured alongside GFAP and UCH-L1, two proteins exclusive to the brain, this combination showed “incredible sensitivity and specificity” in distinguishing patients with a concussion from those without. They say the blood test could be used in different situations to improve diagnosis and enable earlier management of the condition.
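The study does not name the software used, but a minimal sketch of the kind of analysis described, scoring each biomarker and a combined three-marker panel by the area under the ROC curve, might look like the following. All numbers, variable names and the logistic panel model here are hypothetical illustrations, not the study's actual pipeline.

```python
# Minimal sketch (not the study's actual pipeline): estimating how well plasma
# biomarker levels separate concussed patients from controls using the area
# under the ROC curve. All data and variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical plasma levels (pg/mL) for 44 controls and 74 mTBI patients,
# measured under six hours post-injury: columns are IL-6, GFAP, UCH-L1.
controls = rng.lognormal(mean=[1.0, 2.0, 1.5], sigma=0.4, size=(44, 3))
mtbi = rng.lognormal(mean=[1.8, 2.9, 2.1], sigma=0.4, size=(74, 3))

X = np.vstack([controls, mtbi])
y = np.array([0] * len(controls) + [1] * len(mtbi))  # 1 = concussion

# Single-marker accuracy: AUC of each biomarker on its own.
for name, col in zip(["IL-6", "GFAP", "UCH-L1"], X.T):
    print(f"{name}: AUC = {roc_auc_score(y, col):.2f}")

# Panel accuracy: combine the three markers in a simple logistic model.
panel = LogisticRegression(max_iter=1000).fit(np.log(X), y)
panel_scores = panel.predict_proba(np.log(X))[:, 1]
print(f"3-marker panel: AUC = {roc_auc_score(y, panel_scores):.2f}")
```

An AUC of 0.5 means a marker cannot separate the groups at all, while values approaching 1.0 correspond to the "good" to "excellent" diagnostic utility the researchers report.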
“Within the ED, we believe the test might prove useful in providing certainty in difficult-to-assess cases, especially when a patient may be unwilling or unable to communicate their symptoms,” said Biswadev Mitra, one of the study’s co-authors. “One example could be in cases of domestic violence, where the test might reveal a mild brain injury that could otherwise go unnoticed.”
And, of course, the test would be useful for managing sport-related concussions, which occur regularly, especially in contact sports.
“While at this stage it might not be feasible to conduct a test that alters decisions within a match, players with a potential or suspected concussion that are removed from play could feasibly be tested soon after the match, with a more definitive diagnosis helping with many aspects of the player’s recovery and return-to-play process,” McDonald said.
In addition to the trio of biomarkers identified here, the researchers found that another – neurofilament light (NfL) – was elevated in the blood a week after the concussion and had comparable diagnostic properties.
“Beyond the ED, measures of blood NfL may be most beneficial when individuals consult a GP multiple days after an impact, especially in situations where diagnostic certainty is crucial for making safe return-to-work or return-to-play decisions, such as in military or sports settings,” said McDonald.
The study was published in the journal Neurology.
Source: Monash University | Biology |
Perhaps the most obvious feature of a neuron is the long branch called an axon that ventures far from the cell body to connect with other neurons or muscles. If that long, thin projection ever seems like it could be vulnerable, a new MIT study shows that its structural integrity may indeed require the support of a surrounding protein called perlecan. Without that protein in Drosophila fruit flies, researchers at The Picower Institute for Learning and Memory found axonal segments can break apart during development and the connections, or synapses, that they form end up dying away.
Perlecan helps make the extracellular matrix, the proteins and other molecules that surround cells, stable and flexible so that cells can develop and function in an environment that is supportive without being rigid.
“What we found was that the extracellular matrix around nerves was being altered and essentially causing the nerves to break completely. Broken nerves eventually led to the synapses retracting,” says study senior author Troy Littleton, the Menicon Professor in MIT’s departments of Biology and Brain and Cognitive Sciences.
Humans need at least some perlecan to survive after birth. Mutations that reduce, but don’t eliminate, perlecan can cause Schwartz-Jampel syndrome, in which patients experience neuromuscular problems and skeletal abnormalities. The new study may help explain how neurons are affected in the condition, Littleton says, and also deepen scientists’ understanding of how the extracellular matrix supports axon and neural circuit development.
Ellen Guss PhD '23, who recently defended her doctoral thesis on the work, led the research published June 8 in eLife.
At first she and Littleton didn’t expect the study to yield a new discovery about the durability of developing axons. Instead, they were investigating a hypothesis that perlecan might help organize some of the protein components in synapses that fly nerves develop to connect with muscles. But when they knocked out the gene called “trol” that encodes perlecan in flies, they saw that the neurons appeared to “retract” many synapses at a late stage of larval development. Proteins on the muscle side of the synaptic connection remained, but the neuron side of the connection withered away. That suggested that perlecan had a bigger role than they first thought.
Indeed, the authors found that the perlecan wasn’t particularly enriched around synapses. Where it was pronounced was in a structure called the neural lamella, which surrounds axon bundles and acts a bit like the rubbery cladding around a TV cable to keep the structure intact. That suggested that a lack of perlecan might not be a problem at the synapse, but instead causes trouble along axons due to its absence in the extracellular matrix surrounding nerve bundles.
Littleton’s lab had developed a technique for daily imaging of fly neural development called serial intravital imaging. They applied it to watch what happened to the fly axons and synapses over a four-day span. They observed that while fly axons and synapses developed normally at first, not only synapses but also whole segments of axons faded away.
They also saw that the farther an axon segment was from the fly’s brain, the more likely it was to break apart, suggesting that the axon segments became more vulnerable the further out they extended. Looking segment by segment, they found that where axons were breaking down, synapse loss would soon follow, suggesting that axon breakage was the cause of the synapse retraction.
“The breakages were happening in a segment-wide manner,” Littleton says. “In some segments the nerves would break and in some they wouldn’t. Whenever there was a breakage event, you would see all the neuromuscular junctions (synapses) across all the muscles in that segment retract.”
When they compared the structure of the lamella in mutant versus healthy flies, they found that the lamella was thinner and defective in the mutants. Moreover, where the lamella was weakened, axons were prone to break and the microtubule structures that run the length of the axon would become misdirected, protruding outward and becoming tangled up in dramatic bundles at sites of severed axons.
In one other key finding, the team showed that perlecan’s critical role depended on its secretion from many cells, not just neurons. Blocking the protein in just one cell type or another did not cause the problems that total knockdown did, and enhancing secretion from just neurons was not enough to overcome its deficiency from other sources.
Altogether, the evidence pointed to a scenario where lack of perlecan secretion caused the neural lamella to be thin and defective, with the extracellular matrix becoming too rigid. The further from the brain nerve bundles extended, the more likely movement stresses would cause the axons to break where the lamella had broken down. The microtubule structure within the axons then became disorganized. That ultimately led to synapses downstream of those breakages dying away because the disruption of the microtubules means the cells could no longer support the synapses.
“When you don’t have that flexibility, although the extracellular matrix is still there, it becomes very rigid and tight and that basically leads to this breakage as the animal moves and pulls on those nerves over time,” Littleton says. “It argues that the extracellular matrix is functional early on and can support development, but doesn’t have the right properties to sustain some key functions over time as the animal begins to move and navigate around. The loss of flexibility becomes really critical.”
In addition to Littleton and Guss, the paper’s other authors are Yulia Akbergenova and Karen Cunningham.
Support for the study came from the National Institutes of Health. The Littleton Lab is also supported by The Picower Institute for Learning and Memory and The JPB Foundation. | Biology |
A new study has revealed that space weather that disrupts satellites and causes blackouts also impacts how birds fly.
Scientists from the University of Michigan (U-M) found migratory birds are getting lost when the sun emits electromagnetic radiation and charged particles that slam into Earth's magnetic field.
Nocturnally migratory birds - such as geese and swans, sandpipers and thrushes - use Earth's magnetic field as a natural navigation to guide them during their long seasonal migrations.
But when space weather disrupts the magnetic field, fewer birds choose to fly and those that do often end up disorientated or lost due to the disruptions to their navigation.
Researchers have long known that birds rely on the Earth’s magnetic field to navigate during migration, and vagrancy has previously been linked to the same solar activity that can cause auroras in the night sky and disrupt the Earth's magnetic field.
The new findings, based on massive, long-term datasets, demonstrate for the first time a relationship between nocturnal bird migration and geomagnetic disturbances.
The team used a 23-year dataset of bird migration across the US's Great Plains, a major migratory corridor.
Birds choose this route because the Plains extend down the country's center, stretching from Texas in the south to North Dakota near the Canadian border.
Communities of nocturnally migrating birds in this region are diverse: roughly three-quarters (73 percent) are perching birds such as thrushes and warblers, 12 percent are shorebirds including sandpipers and plovers, and about a tenth (nine percent) are waterfowl such as ducks, geese and swans.
The researchers used images collected at 37 NEXRAD radar stations in the central flyway, which included 1.7 million radar scans from the fall and 1.4 million from the spring.
The researchers matched data from each radar station with a customized geomagnetic disturbance index representing the maximum hourly change from background magnetic conditions.
U-M space scientist Daniel Welling explained their study's difficulties: 'The biggest challenge was trying to distill such a large dataset - years and years of ground magnetic field observations - into a geomagnetic disturbance index for each radar site.
'There was a lot of heavy lifting in terms of assessing data quality and validating our final data product to ensure that it was appropriate for this study.'
The research team's data trove was fed into two complementary statistical models to measure the effects of magnetic disturbances on bird migration.
The models controlled for the known effects of weather, temporal variables such as time of night and geographic variables such as longitude and latitude.
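The article does not reproduce the exact index construction or model specifications, so the following is only a hedged sketch of the general workflow it describes: deriving a disturbance index as the maximum hourly departure from background magnetic conditions, then regressing a migration-intensity measure on that index while adjusting for weather, time of night and location. All data, column names and coefficients below are simulated for illustration.

```python
# Illustrative sketch only (simulated data, not the published analysis):
# 1) build a geomagnetic disturbance index from ground magnetometer readings,
# 2) estimate its effect on migration intensity while controlling for covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# --- 1) Disturbance index: max hourly departure from a quiet-time background ---
minutes = pd.date_range("2015-09-01", periods=3 * 24 * 60, freq="min")
h_field = pd.Series(  # hypothetical 1-minute horizontal-field readings, in nT
    21000 + rng.normal(0, 5, len(minutes)).cumsum() * 0.1, index=minutes)
background = h_field.rolling("24h").median()          # slowly varying baseline
hourly_dev = (h_field - background).abs().resample("1h").max()
# Maximum hourly deviation during nighttime hours, per calendar day.
nightly_index = hourly_dev.between_time("20:00", "06:00").resample("1D").max()

# --- 2) Covariate-adjusted regression on simulated radar-night observations ---
n = 5000
df = pd.DataFrame({
    "disturbance": rng.gamma(2.0, 20.0, n),       # stands in for the matched index
    "wind_support": rng.normal(0, 5, n),          # weather covariate
    "hours_after_sunset": rng.uniform(0, 10, n),  # temporal covariate
    "latitude": rng.uniform(27, 48, n),
    "longitude": rng.uniform(-103, -93, n),
})
# Simulated response: migration intensity declines slightly with disturbance.
df["log_intensity"] = (5 - 0.004 * df["disturbance"] + 0.05 * df["wind_support"]
                       - 0.1 * df["hours_after_sunset"] + rng.normal(0, 1, n))

fit = smf.ols("log_intensity ~ disturbance + wind_support + hours_after_sunset"
              " + latitude + longitude", data=df).fit()
print(fit.params["disturbance"], fit.pvalues["disturbance"])
```

In a real pipeline the nightly index would be matched to each radar-night observation before fitting; here the two steps are shown side by side purely to illustrate the shape of the analysis.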
The researchers discovered that fewer birds migrate during space weather disturbances.
They also found that those that do still migrate drift with the wind more frequently during geomagnetic disturbances in autumn instead of expending great effort to battle crosswinds.
Senior author Ben Winger, an assistant professor at the U-M Department of Ecology and Evolutionary Biology and a curator of birds at the U-M Museum of Zoology, explained: 'We found broad support that migration intensity decreases under high geomagnetic disturbance.
'Our results provide ecological context for decades of research on the mechanisms of animal magnetoreception by demonstrating community-wide impacts of space weather on migration dynamics.'
The researchers found that 'effort flying' against the wind was reduced by a quarter under cloudy skies during strong solar storms in autumn, suggesting a combination of obscured celestial cues and magnetic disruption may hinder the birds' navigation.
Lead author Eric Gulson-Castillo, a doctoral student in the U-M Department of Ecology and Evolutionary Biology, said: 'Our results suggest that fewer birds migrate during strong geomagnetic disturbances and that migrating birds may experience more difficulty navigating, especially under overcast conditions in autumn.
'As a result, they may spend less effort actively navigating in flight and consequently fly in greater alignment with the wind.
'Our findings highlight how animal decisions are dependent on environmental conditions - including those that we as humans cannot perceive, such as geomagnetic disturbances - and that these behaviors influence population-level patterns of animal movement.' | Biology |
A new study published in Adaptive Human Behavior and Physiology reveals what high testosterone levels do to women’s immune systems.
The study has discovered a link between female hormone levels and antibody production in response to hepatitis B immunization.
The study discovered a negative relationship between testosterone and immunological responses and a positive relationship between estradiol and immune responses.
“The relationship between steroid sexual hormones and immunological responses is an intriguing field of research. This is because there is still so much to learn,” said study author Javier I. Borráz-León of the University of Turku and the University of Chicago’s Center for Mind and Biology.
“As a result, we decided to contribute to this research by investigating the relationship between two sex hormones (i.e., testosterone and estradiol) and the development of antibodies against hepatitis B in young, healthy women, about whom even less is known than in males.”
The trial included 55 young, healthy Latvian women who were given two doses of a hepatitis B vaccine. The researchers took blood samples before the first immunization, one month after the first vaccination, and one month after the second vaccination in order to examine hormone levels and antibody generation.
Findings from the Study on What High Testosterone Levels Do to Women’s Immune Systems
The researchers discovered that higher testosterone levels in women were linked to a lower immunological response one month after the first immunization.
A previous study revealed a “possible inhibitory influence of T levels on antibody formation, or a potential suppressive effect of the immunological response on T levels,” according to the researchers. Higher estradiol levels, on the other hand, were linked to a stronger immunological response one month following the second vaccine.
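For readers curious what relating hormone levels to vaccine response involves, the sketch below shows one simple way such associations are often assessed, a rank correlation between baseline hormone concentrations and post-vaccination antibody titres. The simulated values and variable names are purely illustrative and are not the study's data or methods.

```python
# Minimal sketch (hypothetical numbers, not the study's data): testing whether
# baseline hormone levels relate to post-vaccination antibody titres with a
# Spearman rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n = 55  # participants, matching the study's sample size

testosterone = rng.lognormal(3.3, 0.3, n)   # hypothetical baseline levels
estradiol = rng.lognormal(4.5, 0.4, n)      # hypothetical baseline levels
# Simulated anti-HBs titres one month after the first dose, weakly and
# negatively tied to testosterone purely for illustration.
titre_dose1 = np.exp(4 - 0.003 * testosterone + rng.normal(0, 0.5, n))

rho_t, p_t = spearmanr(testosterone, titre_dose1)
rho_e, p_e = spearmanr(estradiol, titre_dose1)
print(f"Testosterone vs titre: rho={rho_t:.2f}, p={p_t:.3f}")
print(f"Estradiol vs titre:    rho={rho_e:.2f}, p={p_e:.3f}")
```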
“I think our findings allow us to highlight sex disparities in immune function and its link with sex hormones,” Borraz-Leon added.
“Since women experience considerable hormonal variations throughout their menstrual cycle, we believe it is critical to include the individual’s sex as an important element when performing an experiment of this type as well as when interpreting the results.”
The researchers also saw a decline in testosterone levels between the first and second vaccinations. However, no significant changes in estradiol levels were identified across the three time periods.
“We think it’s quite fascinating that we were able to see variations in testosterone (but not estradiol) concentrations in relation to the development of antibodies against hepatitis B in women,” Borraz-Leon added. “Why only testosterone and not estradiol? This is a question that should be addressed in future research.”
“One of the most interesting problems we want to solve is how the endocrine system and the immune system interact,” the researcher went on to say. “That is, how does hormone synthesis directly affect antibody production (and other immunological markers) and, in turn, how does antibody production govern hormone production?”
What is Antibody Production?
Antibody manufacturing, also known as immunoglobulin production, is the process by which specialized immune system cells known as B cells make antibodies in response to an antigen.
Antigens are foreign substances that enter the body and activate the immune system. When an antigen enters the body, it is identified by B cells that have a specific antigen receptor. The B cells eventually mature into plasma cells, which are specialized cells that produce vast quantities of antibodies.
Antibodies are proteins that can attach to the antigen that triggered their creation and mark the antigen for elimination by other immune system cells.
The process of producing antibodies is a key aspect of the adaptive immune response. This enables the immune system to recognize and respond to a wide range of infections.
Antibodies generated during an immune response can provide protection against subsequent infections with the same organism. Vaccines operate by increasing the creation of antibodies against specific pathogens, which can protect against infection without causing disease.
Now that we understand what antibody production is, let’s briefly look at hormone production.
What is Hormone Production?
A hormone is a chemical substance created in the body’s specialized cells or glands and released into the bloodstream. Hormones operate as messengers, connecting with numerous organs and tissues to regulate and govern a variety of physiological processes. These processes include growth and development, metabolism, reproduction, and stress response.
Hormones are essential for maintaining the balance of the body’s internal environment, known as homeostasis, and play an important part in general bodily functioning. Also, hormones are produced by a variety of organs and tissues throughout the body, including the adrenal glands, pituitary gland, thyroid gland and pancreas.
Let’s look much deeper into hormone production.
The process by which specialized cells or glands in the body manufacture and release hormones into the bloodstream to regulate various biological activities is referred to as hormone production.
Hormones are chemical messengers produced by endocrine glands such as the thyroid, pituitary, adrenal, and pancreas that help regulate a variety of physiological functions, including metabolism, growth and development, reproduction, and stress response.
A complicated feedback system involving numerous organs and hormones in the body tightly regulates hormone production.
Hormone production and release are influenced by a variety of factors, including age, gender, stress, food, and environmental influences. Any interruption in the hormone production process can result in hormonal imbalances, which can lead to a variety of health issues such as diabetes, thyroid diseases, and infertility.
How does Antibody Production Govern Hormone Production?
Antibodies and hormones are two separate components of the body’s immune and endocrine systems, and their synthesis is unrelated. But, in some situations, the immune system might indirectly influence hormone production.
Some autoimmune illnesses, for example, can disrupt the endocrine system by causing the body to create antibodies that attack and damage the cells in the endocrine glands, resulting in hormonal abnormalities.
An example of this is Hashimoto’s thyroiditis, an autoimmune illness in which the body creates antibodies that attack the thyroid gland, leading to hypothyroidism.
Hormone abnormalities, on the other hand, can have an impact on the immune system, causing alterations in antibody production. Thyroid hormones, for example, are essential for immunological function, and changes in thyroid hormone levels can modify the body’s immune response and impact antibody formation.
In conclusion, while antibody and hormone production are not directly associated, they can influence each other indirectly through a variety of processes, including autoimmune illnesses and the impact of hormones on immunological function. | Biology |
Withering molds, root-rotting bacteria, viruses, and other plant pathogens destroy an estimated 15 to 30% of global harvests every year. Early detection can make the difference between a failed crop and a treatable one. Using an airborne science instrument developed at NASA’s Jet Propulsion Laboratory in Southern California, researchers have found that they can accurately spot the stealthy signs of a grape disease that inflicts billions of dollars in annual crop damage. The remote sensing technique could aid ground-based monitoring for this and other crops.
In a pair of new studies, researchers from JPL and Cornell University focused on a viral disease called GLRaV-3 (short for grapevine leafroll-associated virus complex 3). Primarily spread by insects, GLRaV-3 reduces yields and sours developing fruit, costing the U.S. wine and grape industry some $3 billion in damage and losses annually. It typically is detected by labor-intensive vine-by-vine scouting and expensive molecular testing.
The research team wanted to see if they could help growers identify GLRaV-3 infections early and from the air by using machine learning and NASA’s next-generation Airborne Visible/InfraRed Imaging Spectrometer (AVIRIS-NG). The instrument’s optical sensor, which records the interaction of sunlight with chemical bonds, has been used to measure and monitor hazards such as wildfires, oil spills, greenhouse gases, and air pollution associated with volcanic eruptions.
It was during a 2020 campaign to map methane leaks in California that plant pathologist Dr. Katie Gold and her team seized the opportunity to pose a different question: Could AVIRIS-NG uncover undercover crop infection in one of the state’s most important grape-producing regions?
“Like humans, sick plants may not exhibit outward symptoms right away, making early detection the greatest challenge facing growers,” said Gold, an assistant professor at Cornell University and senior author of the new studies. In the case of grapevine leafroll virus, it can take up to a year before a vine betrays the telltale signs of infection, such as discolored foliage and stunted fruit. However, on the cellular level, stress is well underway before then, changing how sunlight interacts with plant tissue.
Aerial Advantage
Mounted in the belly of a research plane, AVIRIS-NG observed roughly 11,000 acres of vineyards in Lodi, California. The region – located in the heart of California’s Central Valley – is a major producer of the state’s premium wine grapes.
The team fed the observations into computer models they developed and trained to distinguish infection. To help check the results, industry collaborators scouted more than 300 acres of the vineyards from the ground for visible viral symptoms while collecting vine samples for molecular testing.
Gold noted it was a labor-intensive process, undertaken during a California heat wave. “Without the hard work of the growers, industry collaborators, and the scouting teams, none of what we accomplished would have been possible,” she said. Similar efforts will continue under the NASA Acres Consortium, of which Gold is a lead scientist.
The researchers found that they were able to differentiate non-infected and infected vines both before and after they became symptomatic, with the best-performing models achieving 87% accuracy. Successful early detection of GLRaV-3 could help provide grape growers up to a year’s warning to intervene.
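The papers' exact modelling choices aren't reproduced in this article, but the general workflow it describes, training a classifier on labelled per-vine reflectance spectra and checking its accuracy on held-out vines, can be sketched as follows. The simulated spectra, the random-forest choice and all parameters here are illustrative assumptions rather than the study's models.

```python
# Minimal sketch (illustrative only; the study's models may differ): training a
# classifier on per-vine reflectance spectra to separate GLRaV-3-infected from
# non-infected vines, then checking held-out accuracy. Spectra are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n_bands = 425   # AVIRIS-NG records roughly this many spectral bands
n_vines = 2000

# Hypothetical reflectance spectra; infected vines get a subtle shift in a
# handful of bands, mimicking pre-symptomatic physiological stress.
X = rng.normal(0.3, 0.05, size=(n_vines, n_bands))
y = rng.integers(0, 2, n_vines)          # 1 = infected (ground-truth label)
X[y == 1, 100:110] += 0.02               # weak signal in a few bands

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 2))
```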
In a complementary paper, the researchers said their case study shows how emerging capabilities in air and space can support ground-based pathogen surveillance efforts. These capabilities include forthcoming missions like NASA’s Surface Biology and Geology (SBG) – part of the fleet of missions that will compose the agency’s Earth System Observatory. They said that SBG will provide data that could be used in combination with machine learning for agricultural decision-making at the global scale.
Fernando Romero Galvan, a doctoral candidate and lead author of both studies, noted that sustainable farming practices are more important than ever in the face of climate change. “I think these are exciting times for remote sensing and plant disease detection,” he said. “Scalable solutions can help growers make data-driven, sustainable crop management decisions.”
“What we did with this study targets one area of California for one disease,” said co-author Ryan Pavlick, a research technologist at JPL. “The ultimate vision that we have is being able to do this across the planet for many crop diseases and for growers all over the world.” | Biology |
Vaccine against deadly chytrid fungus primes frog microbiome for future exposure
A human or animal's microbiome—the collection of often beneficial microorganisms, such as bacteria and fungi, that live on or within a host organism—can play an important role in the host's overall immune response, but it is unclear how vaccines against harmful pathogens impact the microbiome. A new study led by researchers at Penn State found that a new vaccine against the deadly chytrid fungus in frogs can shift the composition of the microbiome, making frogs more resilient to future exposure to the fungus.
The study, published June 12 in a special issue of the journal Philosophical Transactions of the Royal Society B, suggests that the microbiome response could be an important, overlooked part of vaccine efficacy.
"The microorganisms that make up an animal's microbiome can often help defend against pathogens, for example by producing beneficial substances or by competing against the pathogens for space or nutrients," said Gui Becker, associate professor of biology at Penn State and leader of the research team. "But what happens to your microbiome when you get a vaccine, like a COVID vaccine, a flu shot, or a live-attenuated vaccine like the yellow fever vaccine? In this study, we used frogs as a model system to start exploring this question."
Frogs and other amphibians are threatened by the chytrid fungus, which has led to extinctions of some species and severe population declines in hundreds of others across several continents. In susceptible species, the fungus causes a sometimes-lethal skin disease.
"Chytrid is one of the worst, if not the worst, pathogen for wildlife conservation in recent history, and there is a critical need to develop tools to control its spread," said Becker, who is also a member of the One Health Microbiome Center and the Center for Infectious Disease Dynamics at Penn State. "We found that, in some cases, vaccines can induce a protective shift in the microbiome, which suggests that carefully manipulating the microbiome could be used as part of a broader strategy to help amphibians, and perhaps other vertebrates, deal with emerging pathogens."
The researchers applied a vaccine, in this case a non-lethal dosage of a metabolic product created by the chytrid fungus, to tadpoles. After five weeks, they observed how the composition of the microbiome had changed, identifying individual species of bacteria and their relative proportions. The researchers also cultured each species of bacteria in the lab and tested whether bacteria-specific products facilitated, inhibited, or had no effect on chytrid growth, adding to and comparing results with a large database of this information.
"Increasing the concentration and duration of exposure to the chytrid product prophylaxis significantly shifted the composition of the microbiome so that there was a higher proportion of bacteria producing anti-chytrid substances," said Samantha Siomko, a master's student in the Becker Lab at the University of Alabama at the time of the research and first author of the paper. "This protective shift suggests that, if an animal were exposed to the same fungus again, its microbiome would be better capable of fighting the pathogen."
Previous attempts to induce a protective change in the microbiome have relied on adding one or multiple species of bacteria known to make potent antifungal metabolites, i.e. probiotics. However, according to the researchers, the added bacteria must compete with other species in the microbiome and are not always successful at establishing themselves as permanent members of the microbiome.
"These frogs have hundreds of bacteria species on their skin that they pick up from their environment, and the composition changes regularly, including with season," said Becker. "Attempting to manipulate the community, for example by adding a bacterial probiotic, is challenging, because the dynamics in the community are so complex and unpredictable. Our results are promising because we have essentially manipulated the entire bacterial community in a direction that is more effective against fighting the fungal pathogen without adding a living thing that needs to compete for resources to survive."
Notably, the overall number of species—the diversity—within the microbiome was not impacted, only the composition and relative proportions of species. The researchers believe this is positive, as declines in the diversity of the frog microbiome can often lead to illness or death, and it is generally accepted that maintaining a diverse microbiome allows the community of bacteria and microbe species to respond to threats more dynamically and with higher functional redundancy.
The researchers suggest that this adaptive shift in the microbiome composition, which they call the "microbiome memory," could play an important role in vaccine efficacy. In addition to understanding the mechanisms behind the shift, the research team hopes to study the idea of microbiome memory in adult frogs as well as other vertebrate species in the future.
"Our collaborative team implemented a prophylaxis technique that relied on metabolic product derived from the chytrid fungus," said Becker. "It's possible that vaccines based on mRNA or live cells—like those often used to protect against bacterial or viral infections—may differently affect the microbiome, and we are excited to explore this possibility."
In addition to Becker and Siomko, the research team includes Teagan McMahon—who developed the prophylaxis method—at the University of Connecticut; Sasha Greenspan, Wesley Neely, and Stanislava Chtarbanova at the University of Alabama; Douglas Woodhams at the University of Massachusetts; and K. M. Barnett at Emory University.
More information: Selection of an anti-pathogen skin microbiome following prophylaxis treatment in an amphibian model system, Philosophical Transactions of the Royal Society B Biological Sciences (2023). DOI: 10.1098/rstb.2022.0126
Journal information: Philosophical Transactions of the Royal Society B
Provided by Pennsylvania State University | Biology |
Study paves way to more efficient production of 2G ethanol using specially modified yeast strain
A Brazilian study paves the way to increased efficiency of second-generation (2G) ethanol production based on the discovery of novel targets for metabolic engineering in a more robust strain of industrial yeast. An article on the study is published in the journal Scientific Reports.
The databases compiled by the authors are at the disposal of the scientific community in the repository of the State University of Campinas (UNICAMP), which is a member of the Dataverse Project, an international collaborative initiative.
First-generation (1G) ethanol is produced from sources rich in carbohydrates (such as sucrose), especially sugarcane in the Brazilian case. Processing of sugarcane generates large amounts of fibrous residues, such as bagasse, which can be used to produce steam and electricity in power plants. These residues are rich in cellulose and hemicellulose (polymeric carbohydrates that maintain the mechanical strength of plant stem cell walls), which can be used to produce 2G ethanol via conversion into smaller molecules for fermentation by yeast and other microorganisms.
The main challenge in 2G ethanol production is conversion efficiency since cellulose and hemicellulose are hard to hydrolyze. The first step has to be the removal of tough, stringy lignin, which is basically fiber, to make the simple sugars located in the cellulose and hemicellulose available to the yeast. This is costly, consumes a great deal of energy, and releases substances that can inhibit the fermentation process.
"2G ethanol production still requires optimization to increase efficiency. One of the approaches needed entails the identification of yeast strains that resist spoilage by inhibitory molecules derived from the processing of these residues," said Marcelo Mendes Brandão, last author of the article and a researcher at UNICAMP's Center for Molecular Biology and Genetic Engineering (CBMEG). "Some industrial yeast strains are known to have higher levels of tolerance of these compounds. A well-documented example is Saccharomyces cerevisiae SA-1, a Brazilian industrial strain for fuel ethanol that has shown high resistance to inhibitors produced by the pretreatment of cellulosic complexes. This strain was the focus of our study."
Methods
The experiments were performed by first and second authors Felipe Eduardo Ciamponi and Dielle Pierotti Procópio, both of whom were Ph.D. candidates at the time, in a collaboration involving the laboratory led by Thiago Olitta Basso, a researcher at the Department of Chemical Engineering of the University of São Paulo's Engineering School (POLI-USP), and Brandão's lab at CBMEG-UNICAMP.
"To put this study in the context of research on 2G ethanol, we knew certain strains of S. cerevisiae were resistant to these inhibitors, but the molecular mechanism they use to achieve this resistance is complex, involving multiple processes and regulatory pathways," Basso said. The study focused on p-Coumaric acid (pCA), one of the main inhibitors present in sugarcane bagasse after processing. "The data available in the literature shows that pCA inhibits biomass yield and lowers the performance of this yeast strain in 2G ethanol production."
To understand how the yeast responded to the culture medium, the researchers decided to use a multiomics-based approach combined with bioinformatics to integrate analysis of the transcriptome—the full range of messenger RNA (mRNA) molecules expressed by the organism—with quantitative physiological data. Their aim was to arrive at a molecular and physiological characterization of the yeast's response to this key inhibitor.
Procópio and Ciamponi conducted the biological experiments at POLI-USP's BioProcessing Laboratory (BELA) using continuous culturing in chemostats, a type of bioreactor in which the physiological and chemical conditions are tightly controlled, enabling them to isolate transcriptomic alterations that arose in response to the presence of pCA without interference from other variables influenced by environmental conditions.
Samples of steady-state S. cerevisiae SA-1 cultured in anaerobic chemostats with and without pCA were collected to determine physiological parameters. Part of the material was sent to Taiwan for RNA sequencing. The results, which were analyzed at CBMEG-UNICAMP's Integrative Systems Biology Laboratory, showed that the biological mechanisms used by the yeast strain to survive under the influence of this inhibitor are even more intricate than previously thought.
The quantitative physiological data suggested that the yeast tended to increase sugar and ethanol yield when exposed to pCA stress under anaerobic conditions (relevant to the industrial process).
Brazil has advanced in research on ways of leveraging its outstanding biodiversity to optimize biomass yields in the manufacturing of bioproducts—consumer goods that can be built, assembled or produced by converting part of an organism, as in the case of plant tissue and fiber, or by capturing their metabolites. "An example is production of fuel ethanol, a commodity with significant impact on the Brazilian economy," Brandão said.
More information: F. E. Ciamponi et al, Multi-omics network model reveals key genes associated with p-coumaric acid stress response in an industrial yeast strain, Scientific Reports (2022). DOI: 10.1038/s41598-022-26843-2
Journal information: Scientific Reports
Provided by FAPESP | Biology |
Researchers at the University of Cologne's CECAD Cluster of Excellence for Aging Research and the CEPLAS Cluster of Excellence for Plant Sciences have found a promising synthetic plant biology approach for the development of a therapy to treat human neurodegenerative diseases, especially Huntington's disease. In their publication "In-planta expression of human polyQ-expanded huntingtin fragment reveals mechanisms to prevent disease-related protein aggregation" in Nature Aging, they showed that a synthetic enzyme derived from plants -- stromal processing peptidase (SPP) -- reduces the clumping of proteins responsible for the pathological changes in models of Huntington's disease in human cells and the nematode Caenorhabditis elegans.
Huntington's disease is among the so called polyglutamine (polyQ) diseases, a group of neurodegenerative disorders caused by multiple repetitions of glutamine amino acids in specific proteins. An excessive number of polyQ repeats can cause proteins to aggregate or accumulate in harmful and damaging protein deposits, leading to cellular dysfunction and death. To date, nine polyQ disorders have been described in humans. They all remain incurable. Among them, Huntington's disease is an inherited condition that causes widespread deterioration in the brain and disrupts thinking, behavior, emotion and movement.
Plants are immune to harmful protein aggregation
In their recent study, Professor Dr David Vilchez (CECAD) and Dr Ernesto Llamas (CEPLAS) followed an unconventional approach to find potential drugs to treat polyQ diseases like Huntington's. Plants are constantly challenged by the environment, but they cannot move to escape from these conditions. However, plants possess a striking resilience to stress that allows them to live long. Unlike humans who suffer from proteinopathies caused by the toxic aggregation or cluster of proteins, plants do not experience these kinds of diseases. They express hundreds of proteins containing polyQ repeats, but no pathologies from these factors have been reported. To explore how plants deal with toxic protein aggregation, Dr Ernesto Llamas, first author of the study, and colleagues introduced the toxic mutant protein huntingtin in plants, which causes cell death in human neurons. In contrast to animal and human models, they found that Arabidopsis thaliana plants actively removed huntingtin protein clumps and avoid harmful effects.
By means of synthetic biology, the scientists then transferred the plants' ability to avoid aggregation into human cultivated cells and animal models of Huntington's disease. Their hope is that the use of plant proteins could lead to new therapeutic approaches for treating Huntington's disease and other neurodegenerative diseases.
"We were surprised to see plants completely healthy, even though they were genetically producing the toxic human protein. The expression of mutant huntingtin in other models of research like human cultured cells, mice and nematode worms induce detrimental effects and symptoms of disease," said David Vilchez.
Plant protein alleviates symptoms in human cells and nematodes
The next step was to discover how plants avoided the toxic aggregation of mutant huntingtin. Indeed, the scientists discovered that the chloroplasts, the plant-specific organelles that perform photosynthesis, were the reason why plants do not show toxic protein deposits. Llamas said: "Unlike humans, plants have chloroplasts, an extra type of cellular organelle that could provide an expanded molecular machinery to get rid of toxic protein aggregates."
The multidisciplinary team identified the chloroplast plant protein SPP as the reason why plants are unaffected by the problematic human protein. Producing the plant SPP in models of Huntington's disease such as human cultured cells and worms like the nematode C. elegans reduced protein clumps and symptoms of disease. "We were pleased to observe that expression of the plant SPP protein improved motility of C. elegans worms affected by huntingtin even at later aging stages where the symptoms are even worse," said Dr Hyun Ju Lee, a postdoc also involved in the study. The results thus open the door to testing SPP as a potential therapy for Huntington's disease.
Plants as models for aging research
Llamas is convinced that plant research can make a meaningful contribution to treating human diseases. "Many people don't notice that plants can persist amongst variable and extreme environmental conditions that cause protein aggregation. I believe that plant molecular mechanisms hold the key to discovering new drugs that can prevent human diseases. We usually forget that some plants can live thousands of years and should be studied as models of aging research." Dr Seda Koyuncu, another postdoc involved in the study, added: "Over the past years, we have seen several promising approaches to treating hereditary diseases like Huntington's fail. We are confident that our plant synthetic approach will lead to significant advances in the field."
The team has since acquired funding from the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung -- BMBF) through the GO-Bio initial program. "We want to bring our idea into an application. Our plan is to found a start-up to produce plant-derived therapeutic proteins and to test them as potential therapeutics to treat neurodegenerative diseases in humans," said Llamas.
The research was conducted at the University of Cologne's CECAD Cluster of Excellence in Aging Research and CEPLAS Cluster of Excellence on Plant Sciences.
| Biology |
By John Gerritsen of RNZ
Science teachers are shocked that an advance version of the draft school science curriculum contains no mention of physics, chemistry or biology.
The so-called “fast draft” said science would be taught through four contexts - the Earth system, biodiversity, food, energy and water, and infectious diseases.
It was sent to just a few teachers for their feedback ahead of its release for consultation next month, but some were so worried by the content they leaked it to their peers.
Teachers who had seen the document told RNZ they had grave concerns about it. It was embarrassing, and would lead to “appalling” declines in student achievement, they said.
One said the focus on four specific topics was likely to leave pupils bored with science by the time they reached secondary school.
But another teacher told RNZ the document presented a “massive challenge” to teachers and the critics were overreacting.
“It’s the difference from what’s existed before and the lack of content is what’s scaring people. It’s fear of the unknown,” he said.
Association of Science Educators president Doug Walker said he was shocked when he saw a copy.
“Certainly, in its current state, I would be extremely concerned with that being our guiding document as educators in Aotearoa. The lack of physics, chemistry, Earth and space science, I was very surprised by that.”
New Zealand Institute of Physics education council chairman David Housden said physics teachers were not happy either.
“We were shocked. I think that physics and chemistry are fundamental sciences and we would expect to find a broad curriculum with elements of it from space all the way down to tiny particles.”
Institute president Joachim Brand said he was worried teenagers would finish school without learning fundamental knowledge about things like energy and matter.
He warned the draft was heavy on philosophy and light on actual science.
“There is too little science content. Science needs to be learned by actually doing it to some degree. You need to be exposed to the ideas of how maybe atoms work, how electricity works, how electric forces and if that is not specified and you’re only given these broad contexts, then I’m really worried there will be huge gaps,” he said.
Brand said if the draft went ahead, fewer students would specialise in science and universities might find themselves forced to teach basic science to new students.
Secondary Chemistry Educators New Zealand co-chairperson Murray Thompson said after he read the document he was left asking where the science was.
“The stuff in there is really interesting, but we have to teach basic science first. Where’s the physics and chemistry and why can’t we find words like force and motion and elements and particles, why aren’t those words in there?
“It’s the same mistake that they made with maths and literacy. They said ‘here’s the system, here’s the way’ and the maths was all about problem-solving and written problems and all that stuff without the basic skills,” Thompson said.
Michael Johnston from the New Zealand Initiative blew the whistle on the draft document after it was leaked to him.
He said if the curriculum did not change a lot would depend on the content of the achievement standards used to assess students for the NCEA qualification.
“It would be a very strange situation where the standards for NCEA didn’t reflect the curriculum but if they did still have those key concepts, then those key concepts would obviously be taught. The assessment system will trump the curriculum every time if there’s some kind of conflict,” he said.
Walker said schools could still teach physics and chemistry if the draft became final, but it should not be left to chance.
“The problem is that some educators would look at the document and say, ‘Okay, I can do this, this and this’ and you might plan your course around that, but then not do justice to all of these other really important areas,” he said.
HOLISTIC APPROACH
One of the curriculum writers, Cathy Buntting, director of the Wilf Malcolm Institute of Educational Research at the University of Waikato, rubbished suggestions that key areas such as physics and chemistry would not be taught.
“Absolutely not. But they will be teaching the chemistry and the physics that you need to engage with - the big issues of our time - and in order to engage with the excitement of science and the possibilities that science offers,” she said.
However, Buntting said the document was intended to encourage change.
“What we are pushing towards with the current fast draft is more of a holistic approach to how the different science concepts interact with each other rather than a purist, siloed approach.”
Buntting said the draft was very high-level, as were curriculum documents for other subjects, but it clearly needed more detail about where teachers should expect to teach various science concepts.
The Ministry of Education said it was still finalising the draft document.
“We are currently in the process of completing the draft science content based on feedback from fast testing, as well as being guided by national and international research such as PISA (Programme for International Student Assessment).
“We will then go out for wider sector and public feedback from August to late October this year, with a full draft, and sufficient time for people to give us feedback,” it said. | Biology |
States are banning this invasive Callery pear tree and urging homeowners to cut it down
When people think of spring, they often picture flowers and trees blooming. And if you live in the U.S. Northeast, Midwest or South, you have probably seen a medium-sized tree with long branches, covered with small white blooms—the Callery pear (Pyrus calleryana).
For decades, Callery pear—which comes in many varieties, including "Bradford" pear, "Aristocrat" and "Cleveland Select"—was among the most popular trees in the U.S. for ornamental plantings. Today, however, it's widely recognized as an invasive species. Land managers and plant ecologists like me are working to eradicate it to preserve biodiversity in natural habitats.
As of 2023, it is illegal to sell, plant or grow Callery pear in Ohio. Similar bans will take effect in South Carolina and Pennsylvania in 2024. North Carolina and Missouri will give residents free native trees if they cut down Callery pear trees on their property.
How did this tree, once in high demand, become designated by the U.S. Forest Service as "Weed of the Week"? The devil is in the biological details.
A quasi-perfect tree
Botanists brought the Callery pear to the U.S. from Asia in the early 1900s. They intentionally bred the horticultural variety to enhance its ornamental qualities. In doing so, they created an arboricultural wunderkind. As The New York Times observed in 1964:
"Few trees possess every desired attribute, but the Bradford ornamental pear comes unusually to close to the ideal."
Modern varieties of Callery pear produce an explosion of white flowers in springtime, followed by deep green summer foliage that turns deep red and maroon in autumn. They also are very tolerant of urban soils, which can be highly compacted and hard for roots to penetrate. The trees grow quickly and have a rounded shape, which made them suitable for planting in rows along driveways and roadsides.
During the post-World War II suburban development boom, Callery pear trees became extremely popular in residential settings. In 2005 the Society of Municipal Arborists named the "Chanticleer" variety the urban street tree of the year. But the breeding process that created this and other varieties of Callery pear was producing unexpected results.
Cloning to produce an American original
To ensure that each Callery pear tree had bright blooms, red foliage and other desired traits, horticulturists created identical clones through a process known as grafting: creating seedlings from cuttings of trees with the desired characteristics.
This approach eliminated the messy complexity of mixing genes during sexual reproduction and ensured that when each tree matured, it would have the characteristics that homeowners desire. Every tree of a specific variety was a genetically identical clone.
Grafting also meant Callery pear trees could not make fruits. Some fruit trees, such as peaches and tart cherries, can fertilize their flowers with their own pollen. In contrast, Callery pear is self-incompatible: pollen on an individual tree cannot fertilize flowers on that tree. And since all Callery pears of a specific variety planted in a neighborhood would be identical clones, they would effectively be the same tree.
If a tree can't produce fruits, it can't disperse into natural habitats. Gardeners and landscapers thought it was perfectly safe to plant Callery pear near natural habitats, such as prairies, because the species was trapped in place by its reproductive biology. But the tree would break free from its isolation and spread seeds far and wide.
The great escape
University of Cincinnati botanist Theresa Culley and colleagues have found that as horticulturalists tinkered with Callery pears to produce new versions, they made the individuals different enough to escape the fertilization barrier. If a neighborhood had only "Bradford" pear trees, then no fruits could be produced—but once someone added an "Aristocrat" pear to their yard, then these two varieties could fertilize each other and produce fruits.
When Callery pear trees in gardens and parks started depositing seeds in nearby areas, wild populations of the trees became established. Those wild trees could pollinate one another, as well as neighborhood trees.
In today's landscape, Callery pear is astonishingly fertile. The prolific flowering that horticulturists intentionally bred into these varieties now yields tremendous crops of pears each year. Although these little pears are generally not edible by humans, birds feed on the fruit, then fly away and excrete the seeds into natural habitats. Callery pear has become one of the most problematic invasive species in the eastern United States.
A thorny problem
Like other invasives, Callery pears crowd out native species. Once Callery pear seedlings spread from habitat edges into grasslands, they have advantages that allow them to dominate the site.
In my research lab, we have found that Callery pear leafs out very early in spring and drops its leaves late in fall. This enables it to soak up more sun than native species. We also have discovered that during invasion, these trees alter the soil and release chemicals that suppress the germination of native plants.
Callery pear is highly resistant to natural disturbances. In fact, when my graduate student Meg Maloney tried to kill the trees by using prescribed fires or applying liquid nitrogen directly to stumps after cutting the trees down, her efforts failed. Instead, the trees sprouted aggressively and seemingly gained strength.
Once Callery pear has escaped into natural areas, its seedlings produce very sharp, stiff thorns that can puncture shoes or even tires. This makes the trees a menace to people working in the area, as well as to native plants. Another nuisance factor is that when Callery pears bloom, they produce a strong odor that many people find unpleasant.
Currently, directly applying herbicides is the only known control for a Callery pear invasion. But the trees are so successful at spreading that poisoning their seedlings may simply create space for other Callery pear seedlings to establish. It is unclear how habitat managers can escape a confounding ecological cycle of invasion, herbicide application and re-invasion.
Banned but not gone
In response to work by the Ohio Invasive Plants Council and other experts, Ohio has taken the extraordinary step of banning Callery pear to thwart its ecological invasion into natural habitats. But the trees are common in residential areas across the state and have established vigorous populations in natural habitats. Ecologists will be working well into the future to maintain openness and biodiversity in areas where Callery pear is invading.
In the meantime, homeowners can help. Horticulturists recommend that people who have a Callery pear on their property should remove it and replace it with something that is not an invasive species. Few trees possess every desired attribute, but many native trees have visually attractive features and will not threaten ecosystems in your region.
Provided by The Conversation | Biology |
New research indicates that attentional dysregulation is a crucial factor that connects cognitive impairments and various mental health problems in adolescents. The study, published in Scientific Reports, highlights the importance of understanding and addressing cognitive performance in the context of mental health. It suggests that targeting attentional difficulties early on may help prevent or alleviate cognitive and emotional problems in individuals with psychiatric conditions.
The motivation behind this study was to investigate the relationship between cognitive deficits and psychopathology, specifically focusing on attention dysregulation, which refers to difficulties in controlling and focusing attention, as a potential key factor that connects various mental health problems.
Earlier studies have shown that reduced cognitive performance is common across different psychiatric disorders. There is also a substantial overlap among different forms of mental health issues in population samples. However, the exact cause and effect relationship between these factors is still unclear.
“Mental health problems can have variable trajectories and largely different symptoms, which psychiatry continues to struggle to research and understand,” explained study author Clark Roberts, who recently obtained his PhD in Psychiatry/Clinical Neuroscience at the University of Cambridge.
“While hallmark symptoms in categorical mental disorders can be quite distinct, psychopathological dimensions tend to covary together – referred to as the P-Factor. Confusingly for precision psychiatry, categorical diagnosis in patient samples all appear to show general deficits in ranging cognitive batteries employed by researchers (C-factor).
“Nosology, the branch of medicine concerned with classifying diseases and mental disorders, has struggled to make sense of this, given its general aim for precision in diagnosis,” Roberts said. “While the P- and C-Factors inform us of widespread comorbidity in mental disorders, they still do little in explaining hallmark symptoms, the connections and dynamics between symptoms, or potentially influential transdiagnostic mechanisms and features.”
“Therefore, the aim of this research was to examine potentially central features among covarying psychopathological dimensions and frequently observed general cognitive deficits therein.”
To address these questions, the researchers conducted a longitudinal analysis using the Adolescent Brain Cognitive Development (ABCD) cohort, which included a large sample of 11,876 adolescents aged 9-12. The study employed an extensive cognitive task battery and measured various dimensions of psychopathology.
The cognitive outcomes were measured using age-corrected measures of fluid and crystallized intelligence, which were derived from different cognitive tests. The fluid outcome encompassed executive functions such as inhibitory control, shifting, and working memory, while the crystallized abilities were related to language and verbal tasks.
The psychopathological dimensions were assessed using the Parent-Child Behavioral Checklist (CBL), which contains distinct DSM-oriented psychiatric symptom scales related to ADHD, anxiety, depression, and other dimensions. Parents rated how frequently their child displayed these symptoms at the time of testing.
The researchers had two main hypotheses: first, that attentional dysregulation would be responsible for the general cognitive difficulties seen in people with different psychiatric disorders, and second, that it would be strongly associated with other mental health issues.
“Attention is a central function for effective learning and cognition in brains, often acting as an interface, and serving to conditionally parametrize information for agents interacting in complex environments,” Roberts told PsyPost. “My interest was to examine 1.) the degree to which attention dysregulation may influence and connect with wide-ranging mental health problems and 2.) potentially functions as a bridge between mental health problems and deficits in commonly used cognitive tasks employed by researchers.”
The findings supported these hypotheses. The study showed that the cognitive problems commonly observed in people with mental health conditions are largely influenced by their level of attentional dysregulation, especially in cases of anxiety or low mood.
Interestingly, individuals experiencing anxiety or depression without the attentional dysregulation characteristic of Attention-Deficit/Hyperactivity Disorder (ADHD) did not show significant cognitive impairments. In other words, when individuals experience anxiety or depression without these ADHD traits, their cognitive performance seems to be relatively intact and, in some cases, even better than controls. This suggests that attentional dysregulation might be the primary factor contributing to cognitive difficulties in these conditions.
“In one of the largest cohorts to date, with over 11,000 children, the relationship between low mood/anxiety and wide-ranging cognitive outcomes is relatively weak,” Roberts explained. “In fact, when controlling for attention dysregulation proxies, any predictive relationships go away. Further, individuals who experience high anxiety and low mood but concurrently score low in dimensions of attention dysregulation are not only comparable to controls, but outperform them in some cognitive tasks. Therefore, while low mood/anxiety are considered subjectively negative states, they may not necessarily be clear proxies for dysfunctional components of biology or general cognition on their own within some ranges.”
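The "controlling for attention dysregulation" result Roberts describes is, statistically, a covariate-adjustment effect: an association between anxiety and cognition can shrink to nothing once a third variable that drives both is included in the model. The sketch below is a minimal, hypothetical simulation of that pattern; it does not use ABCD data, and the variable names and effect sizes are invented purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Hypothetical standardized variables (invented effect sizes):
# attention dysregulation drives both anxiety symptoms and cognitive scores.
attention_dysreg = rng.normal(size=n)
anxiety = 0.5 * attention_dysreg + rng.normal(scale=0.9, size=n)
cognition = -0.6 * attention_dysreg + rng.normal(scale=0.8, size=n)

# Model 1: cognition ~ anxiety (no adjustment) -> apparent negative association
m1 = sm.OLS(cognition, sm.add_constant(anxiety)).fit()

# Model 2: cognition ~ anxiety + attention dysregulation -> anxiety effect attenuates
X2 = sm.add_constant(np.column_stack([anxiety, attention_dysreg]))
m2 = sm.OLS(cognition, X2).fit()

print("anxiety coefficient, unadjusted:", round(m1.params[1], 3))
print("anxiety coefficient, adjusted:  ", round(m2.params[1], 3))
```

In this toy setup the unadjusted coefficient comes out clearly negative (roughly -0.3) while the adjusted one sits near zero, mirroring the pattern the authors report; the real analysis of course involves far richer measures and covariates.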
Moreover, the researchers found that attentional dysregulation might play a role in connecting various mental health problems. For example, it linked to symptoms of anxiety and depression, and it was associated with traits like impulsivity, which could influence cognitive performance.
“Dysregulated attentional control systems in the brain – with symptoms sometimes contentiously referred to as ADHD at developmental extremes – can impair normative cognitive/educational performance and generate negative ongoing social interactions,” Roberts told PsyPost. “These then interactively and straightforwardly form feedback loops with anxiety and depression (and many other mental health problems).”
“This research confirms that developmental attention problems can exacerbate wide-ranging mental health dimensions, perhaps in part by developmentally and/or causally maintaining the structure of psychopathological networks, and by producing ongoing negative feedback with the environment whether socially or cognitively. We explored several important and distinct bridging symptoms and feedback occurring between attention problems and frequently comorbid dimensions of anxiety/low mood.”
The researchers also observed that perfectionism seemed to buffer against some of the cognitive and educational deficits linked to other psychopathological traits. Perfectionism refers to having high standards and striving for excellence in one’s actions and achievements. Individuals with high levels of perfectionism may be highly motivated to perform well in tasks, and this intrinsic motivation towards better performance appeared to positively influence cognitive outcomes.
Moderate levels of trait worry, which is related to anxiety, also showed a beneficial effect on cognitive performance, forming an inverted U-shaped relationship. This means that some worry or anxiety may actually enhance cognitive functioning up to a certain point.
“Creating a more nuanced and perhaps surprising picture, perfectionism seems to be a dimension positively associated with nearly all negative mental health dimensions while also being positively associated with cognitive performance,” Roberts told PsyPost. “In fact, some of the highest scores on cognitive outcomes were adolescents who showed moderate levels of anxiety and high perfectionism. Perfectionism may actually buffer against some of the negative outcomes of other psychopathological dimensions on cognitive performance, perhaps due to how it represents some individuals’ higher regard for others’ evaluations.”
But the study, like all research, includes some caveats.
“Likely the biggest limitation is that this study relied on parent assessments of adolescent attention problems and is therefore subject to some measurement bias. Importantly, attention is a multifarious construct in psychology research which can confusingly refer to many different processes and measurements,” Roberts explained.
“In this context, attention dysregulation likely refers to inefficient control of self-motivational systems directing normative top-down goal-directed actions over time given the metrics used. Thankfully, this was an extensive sample and parents have lots of exposure to their children in many contexts.”
“Further, distinct mental health problems may occur at different developmental stages, thus it is important to confirm this work in adults,” Roberts said. “Lastly, attention dysregulation can be both a cause and effect of mental health problems and while some of these analyses suggest there is a potentially causal developmental relationship, it is not possible to understand these dynamics better without further research.”
Nevertheless, the study provides evidence that attentional dysregulation is a key factor contributing to cognitive difficulties in adolescents, especially those with anxiety or low mood. While general mental health issues might be linked to cognitive impairments, attention regulation problems appear to be an essential factor in these impairments.
“ADHD remains a contentious diagnostic classification,” Roberts added. “While some do suffer debilitating extremes, it’s likely not as binary as once thought. Preliminary evidence suggests ADHD is associated with deviations in brain development and function, but it remains unclear whether this may be an adaptive function in some contexts – such as foraging. Still, contemporary normative demands in vocational and social environments may strongly exacerbate negative features of poor attentional control – which likely everybody experiences to varying degrees.”
The study, “Impact and centrality of attention dysregulation on cognition, anxiety, and low mood in adolescents“, was authored by Clark Roberts, Barbara J. Sahakian, Shuquan Chen, Samantha N. Sallie, Clare Walker, Simon R. White, Jochen Weber, Nikolina Skandali, Trevor W. Robbins, and Graham K. Murray. | Biology |
How small differences in data analysis make huge differences in results
Over the past 20 years or so, there has been growing concern that many results published in scientific journals can't be reproduced.
To understand how different researchers might arrive at different results, we asked hundreds of ecologists and evolutionary biologists to answer two questions by analyzing given sets of data. They arrived at a huge range of answers.
Why is reproducibility a problem?
The causes of problems with reproducibility are common across science. They include an over-reliance on simplistic measures of "statistical significance" rather than nuanced evaluations, the fact journals prefer to publish "exciting" findings, and questionable research practices that make articles more exciting at the expense of transparency and increase the rate of false results in the literature.
Much of the research on reproducibility and ways it can be improved (such as "open science" initiatives) has been slow to spread between different fields of science.
Interest in these ideas has been growing among ecologists, but so far there has been little research evaluating replicability in ecology. One reason for this is the difficulty of disentangling environmental differences from the influence of researchers' choices.
One way to get at the replicability of ecological research, separate from environmental effects, is to focus on what happens after the data is collected.
Birds and siblings, grass and seedlings
We were inspired by work led by Raphael Silberzahn which asked social scientists to analyze a dataset to determine whether soccer players' skin tone predicted the number of red cards they received. The study found a wide range of results.
We emulated this approach in ecology and evolutionary biology with an open call to help us answer two research questions:
- "To what extent is the growth of nestling blue tits (Cyanistes caeruleus) influenced by competition with siblings?"
- "How does grass cover influence Eucalyptus spp. seedling recruitment?" ("Eucalyptus spp. seedling recruitment" means how many seedlings of trees from the genus Eucalyptus there are.)
Two hundred and forty-six ecologists and evolutionary biologists answered our call. Some worked alone and some in teams, producing 137 written descriptions of their overall answer to the research questions (alongside numeric results). These answers varied substantially for both datasets.
Looking at the effect of grass cover on the number of Eucalyptus seedlings, we had 63 responses. Eighteen described a negative effect (more grass means fewer seedlings), 31 described no effect, six teams described a positive effect (more grass means more seedlings), and eight described a mixed effect (some analyses found positive effects and some found negative effects).
For the effect of sibling competition on blue tit growth, we had 74 responses. Sixty-four teams described a negative effect (more competition means slower growth, though only 37 of these teams thought this negative effect was conclusive), five described no effect, and five described a mixed effect.
What the results mean
Perhaps unsurprisingly, we and our co-authors had a range of views on how these results should be interpreted.
We have asked three of our co-authors to comment on what struck them most.
Peter Vesk, who was the source of the Eucalyptus data, said, "Looking at the mean of all the analyses, it makes sense. Grass has essentially a negligible effect on [the number of] eucalypt tree seedlings, compared to the distance from the nearest mother tree. But the range of estimated effects is gobsmacking. It fits with my own experience that lots of small differences in the analysis workflow can add to large variation [in results]."
Simon Griffith collected the blue tit data more than 20 years ago, and it was not previously analyzed due to the complexity of decisions about the right analytical pathway. He said,
"This study demonstrates that there isn't one answer from any set of data. There are a wide range of different outcomes and understanding the underlying biology needs to account for that diversity."
Meta-researcher Fiona Fidler, who studies research itself, said, "The point of these studies isn't to scare people or to create a crisis. It is to help build our understanding of heterogeneity and what it means for the practice of science. Through metaresearch projects like this we can develop better intuitions about uncertainty and make better calibrated conclusions from our research."
What should we do about it?
In our view, the results suggest three courses of action for researchers, publishers, funders and the broader science community.
First, we should avoid treating published research as fact. A single scientific article is just one piece of evidence, existing in a broader context of limitations and biases.
The push for "novel" science means studying something that has already been investigated is discouraged, and consequently we inflate the value of individual studies. We need to take a step back and consider each article in context, rather than treating them as the final word on the matter.
Second, we should conduct more analyses per article and report all of them. If research depends on what analytic choices are made, it makes sense to present multiple analyses to build a fuller picture of the result.
And third, each study should include a description of how the results depend on data analysis decision. Research publications tend to focus on discussing the ecological implications of their findings, but they should also talk about how different analysis choices influenced the results, and what that means for interpreting the findings.
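One lightweight way to act on the second and third recommendations is a small "multiverse" or specification analysis: run the same question through several defensible model specifications and report the full spread of estimates rather than a single number. The sketch below does not use the blue tit or Eucalyptus datasets; it relies on simulated data and invented variable names purely to show the mechanics.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400

# Simulated stand-in for an ecological dataset; all variable names are hypothetical.
df = pd.DataFrame({
    "grass_cover": rng.uniform(0, 100, n),
    "dist_to_mother_tree": rng.uniform(1, 50, n),
})
df["seedlings"] = rng.poisson(
    np.exp(1.0 - 0.002 * df["grass_cover"] - 0.04 * df["dist_to_mother_tree"])
)

# Three defensible specifications an analyst might plausibly choose between.
specs = {
    "Poisson GLM, grass only": (smf.poisson, "seedlings ~ grass_cover"),
    "Poisson GLM, grass + distance": (smf.poisson, "seedlings ~ grass_cover + dist_to_mother_tree"),
    "OLS on log(seedlings + 1)": (smf.ols, "np.log(seedlings + 1) ~ grass_cover"),
}

for label, (constructor, formula) in specs.items():
    model = constructor(formula, data=df)
    result = model.fit(disp=0) if constructor is smf.poisson else model.fit()
    estimate = result.params["grass_cover"]
    print(f"{label:32s} grass_cover coefficient: {estimate:+.4f}")
```

Reporting the whole table of estimates, rather than only the specification that happens to give the most striking result, is exactly the kind of transparency about analytical decisions the authors are calling for.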
More information: Elliot Gould et al, Same data, different analysts: variation in effect sizes due to analytical decisions in ecology and evolutionary biology, BMC Biology (2023). DOI: 10.32942/X2GG62
Journal information: BMC Biology
Provided by The Conversation | Biology |
Vikas Nanda has spent more than two decades studying the intricacies of proteins, the highly complex substances present in all living organisms. The Rutgers scientist has long contemplated how the unique patterns of amino acids that compose proteins determine whether they become anything from hemoglobin to collagen, as well as the subsequent, mysterious step of self-assembly where only certain proteins clump together to form even more complex substances. So, when scientists wanted to conduct an experiment pitting a human – one with a profound, intuitive understanding of protein design and self-assembly – against the predictive capabilities of an artificially intelligent computer program, Nanda, a researcher at the Center for Advanced Biotechnology and Medicine (CABM) at Rutgers, was one of those at the top of the list. Now, the results to see who – or what – could do a better job at predicting which protein sequences would combine most successfully are out. Nanda, along with researchers at Argonne National Laboratory in Illinois and colleagues from throughout the nation, reports in Nature Chemistry that the battle was close but decisive. The competition matching Nanda and several colleagues against an artificial intelligence (AI) program has been won, ever so slightly, by the computer program. Scientists are deeply interested in protein self-assembly because they believe understanding it better could help them design a host of revolutionary products for medical and industrial uses, such as artificial human tissue for wounds and catalysts for new chemical products. “Despite our extensive expertise, the AI did as good or better on several data sets, showing the tremendous potential of machine learning to overcome human bias,” said Nanda, a professor in the Department of Biochemistry and Molecular Biology at Rutgers Robert Wood Johnson Medical School. Proteins are made of large numbers of amino acids joined end to end. The chains fold up to form three-dimensional molecules with complex shapes. The precise shape of each protein, along with the amino acids it contains, determines what it does. Some researchers, such as Nanda, engage in “protein design,” creating sequences that produce new proteins. Recently, Nanda and a team of researchers designed a synthetic protein that quickly detects VX, a dangerous nerve agent, and could pave the way for new biosensors and treatments. For reasons that are largely unknown, proteins will self-assemble with other proteins to form superstructures important in biology. Sometimes, proteins look to be following a design, such as when they self-assemble into a protective outer shell of a virus, known as a capsid. In other cases, they self-assemble when something goes wrong, forming deadly biological structures associated with diseases as varied as Alzheimer’s and sickle cell. “Understanding protein self-assembly is fundamental to making advances in many fields, including medicine and industry,” Nanda said. In the experiment, Nanda and five other colleagues were given a list of proteins and asked to predict which ones were likely to self-assemble. Their predictions were compared to those made by the computer program. The human experts, employing rules of thumb based on their observation of protein behavior in experiments, including patterns of electrical charges and degree of aversion to water, chose 11 proteins they predicted would self-assemble. The computer program, based on an advanced machine-learning system, chose nine proteins. 
The humans were correct for six out of the 11 proteins they chose. The computer program earned a higher percentage, with six out of the nine proteins it recommended able to self-assemble.
The experiment showed that the human experts “favored” some amino acids over others, sometimes leading them to incorrect choices. Also, the computer program correctly pointed to some proteins with qualities that didn’t make them obvious choices for self-assembly, opening the door to further inquiry.
The experience has made Nanda, once a doubter of machine learning for protein assembly investigations, more open to the technique. “We’re working to get a fundamental understanding of the chemical nature of interactions that lead to self-assembly, so I worried that using these programs would prevent important insights,” Nanda said. “But what I’m beginning to really understand is that machine learning is just another tool, like any other.”
Other researchers on the paper included Rohit Batra, Henry Chan, Srilok Srinivasan, Harry Fry and Subramanian Sankaranarayanan, all with the Argonne National Laboratory; Troy Loeffler, SLAC National Accelerator Laboratory; Honggang Cui, Johns Hopkins University; Ivan Korendovych, Syracuse University; Liam Palmer, Northwestern University; and Lee Solomon, George Mason University.
The study, “Machine learning overcomes human bias in the discovery of self-assembling peptides,” was published in Nature Chemistry on 31 October 2022. | Biology |
Scientists are expanding the genetic map of humanity. This week, a large team published the first wave of research from the Pangenome Project, an effort to better capture the diversity of people around the world. The findings should allow scientists to better understand the influence of genetics on our health and evolutionary journey.
This past April saw the 20th anniversary of the Human Genome Project coming to an official end. Thousands of scientists worked together to unravel and translate most of the genetic information that makes up a person. The knowledge gained from the project ushered in a new era of scientific research and has contributed to advances in genetic engineering, biology, and cancer treatment, among other things.
Even at the time, though, the project researchers knew that their work was incomplete. They had only sequenced roughly 90% of the genome (the first truly complete genome sequence was obtained in 2022). And while the genome was sourced from a composite of 11 blood donors, most of the information had come from a single donor in Buffalo, New York. In the decades since, large studies, organizations, and private companies have sequenced and analyzed the genomes of millions more people. But the bulk of genomes, especially those used to conduct research, have still largely come from white, European populations—a disparity with important implications.
Though the genomes of any two people are nearly identical, there can be gene variants that are more commonly or only found in one broad population of humans. Some variants might also affect people in one population differently than others. And simply knowing about these variants and other types of differences can provide clues as to the actual function of a particular gene. Now, a large team of scientists says it’s taken a major step closer to capturing the entire breadth of genetic diversity, via the Human Pangenome Reference Consortium.
On Wednesday, the group published a paper in Nature detailing the first draft of their human pangenome reference, as well as two other studies based on studying this genome. The team’s pangenome is derived from 47 people meant to represent ancestrally diverse populations from parts of the world including South America, Africa, and Asia. Compared to the current human genome commonly used as a reference in research, the pangenome adds 119 million base pairs of genetic material and 1,115 gene duplications.
“Having a high-quality human pangenome reference that increasingly reflects the diversity of the human population will enable scientists and healthcare professionals to better understand how genomic variants influence health and disease, and move us towards a future in which genomic medicine benefits everyone,” project collaborator Eric Green, director of the National Human Genome Research Institute, said in a press conference detailing the project this week.
The research seems to be paying off already. According to the team, using the pangenome over the current reference genome increased their ability to detect structural genetic variations by 104%. In one paper, scientists used it to help them map millions of previously uncharacterized single-nucleotide variations (the most common form of genetic variation) within long segments of repetitive DNA called segmental duplications. These duplications are thought to be crucial to our evolution since they’re only commonly found in humans and other great apes compared to other mammals. But they’ve also been especially hard to sequence and analyze, so a greater understanding of their nature is sorely needed.
The Pangenome Project is still only in its beginning stages. The team ultimately plans to cobble it together from the genomes of 350 people. And from there, they and other scientists hope to use it as a way to better reveal and study variants that affect our biology and health, including those that influence our immune response or our risk of common conditions like cardiovascular disease.
At the press conference, project researcher and a geneticist at the University of Washington Evan Eichler said, “There’s a lot more left to be discovered. And I think we now have the framework to actually do that discovery.” | Biology |
Plastic waste is clogging up our rivers and oceans and causing long-lasting environmental damage that is only just starting to come into focus. But a new approach that combines biological and chemical processes could greatly simplify the process of recycling it.
While much of the plastic we use carries symbols indicating it can be recycled, and authorities around the world make a big show about doing so, the reality is that it’s easier said than done. Most recycling processes only work on a single type of plastic, but our waste streams are made up of a complex mixture that can be difficult and expensive to separate.
On top of that, most current chemical recycling processes produce end products of significantly worse quality that can’t be recycled themselves, which means we’re still a long way from the goal of a circular economy when it comes to plastics.
But a new approach that uses a chemical process to break down mixed plastic waste into simpler chemical compounds before genetically modified bacteria convert them into a single, valuable end product could point the way to a promising new solution to our plastic crisis.
This new hybrid technique, outlined in a recent paper in Science, builds upon previous research that showed that a mixture of different kinds of plastics could be broken down and converted into an array of useful chemicals by oxidizing them with the help of a catalyst.
The problem is that the resulting assortment of chemicals requires complex separation processes to isolate and purify them, which makes the approach impractical in real terms. However, the “oxygenates” produced by this process have an attractive quality: they are much more soluble in water than the products of most chemical recycling processes.
This means it’s much easier for them to be taken up by living things, opening up the prospect of using biological processes to further refine them. Taking advantage of this, the researchers genetically engineered a species of soil bacteria to absorb this concoction of chemicals and use them to produce a single end product—a process known as “biological funneling.”
In their experiments, the group created two different strains, one capable of producing b-ketoadipate, a precursor for a variety of performance-enhanced polymers, and another that produced polyhydroxyalkanoates, a family of bioplastics used in a host of medical applications.
When they tested out their hybrid approach, the researchers found that the first oxidation step was capable of converting a mixture of polystyrene, polyethylene, and PET into benzoic acid and terephthalic acid at an efficiency of 60 percent and dicarboxylic acids at an efficiency of 20 percent after 5.5 hours.
They then recovered the metal catalyst from the mixture and fed it to their custom-made bacteria. Some of the chemicals were consumed by the bacteria to help them grow, while the rest were converted into the desired end product. Overall, they were able to convert the plastic mixture into b-ketoadipate with an efficiency of 57 percent.
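As a rough consistency check on those numbers, if one assumes that the oxidation step and the biological funneling step compose multiplicatively on the same basis (an assumption, since the exact accounting in the paper is not given here), the reported 60 percent and 57 percent figures imply that the engineered bacteria converted roughly 95 percent of the oxygenates they received:

```python
# Hypothetical back-of-the-envelope check; assumes the two stages compose
# multiplicatively on the same basis, which the summary above does not guarantee.
oxidation_yield = 0.60   # mixed plastics -> soluble oxygenates (reported)
overall_yield = 0.57     # mixed plastics -> b-ketoadipate (reported)

biological_yield = overall_yield / oxidation_yield
print(f"Implied biological funneling efficiency: {biological_yield:.0%}")  # ~95%
```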
While the approach devised by the researchers is just a prototype, there are already some promising avenues for scaling it up and widening its scope. While they only tested the technique on three plastics, it could easily be extended to polypropylene and polyvinyl chloride.
Continuous reactor systems already in use elsewhere could help improve oxygen delivery and continuously remove the end products so that they don’t degrade before the process is finished. What’s more, it should be possible to engineer other bacteria strains to produce a wide range of different end products.
While a full analysis of the economics of the approach still needs to be done, this kind of hybrid recycling process holds considerable promise for dealing with the complicated mixture of plastics we throw away every day. A truly circular plastic economy might not be so far off after all.
Image Credit: tanvi sharma / Unsplash
Edd Gent (http://www.eddgent.com/) is a freelance science and technology writer based in Bangalore, India. His main areas of interest are engineering, computing and biology, with a particular focus on the intersections between the three. | Biology |
You’re on the vacation of a lifetime in Kenya, traversing the savanna on safari, with the tour guide pointing out elephants to your right and lions to your left. Years later, you walk into a florist’s shop in your hometown and smell something like the flowers on the jackalberry trees that dotted the landscape. When you close your eyes, the store disappears and you’re back in the Land Rover. Inhaling deeply, you smile at the happy memory.Now let’s rewind. You’re on the vacation of a lifetime in Kenya, traversing the savanna on safari, with the tour guide pointing out elephants to your right and lions to your left. From the corner of your eye, you notice a rhino trailing the vehicle. Suddenly, it sprints toward you, and the tour guide is yelling to the driver to hit the gas. With your adrenaline spiking, you think, “This is how I am going to die.” Years later, when you walk into a florist’s shop, the sweet floral scent makes you shudder.“Your brain is essentially associating the smell with positive or negative” feelings, said Hao Li, a postdoctoral researcher at the Salk Institute for Biological Studies in California. Those feelings aren’t just linked to the memory; they are part of it: The brain assigns an emotional “valence” to information as it encodes it, locking in experiences as good or bad memories.And now we know how the brain does it. As Li and his team reported recently in Nature, the difference between memories that conjure up a smile and those that elicit a shudder is established by a small peptide molecule known as neurotensin. They found that as the brain judges new experiences in the moment, neurons adjust their release of neurotensin, and that shift sends the incoming information down different neural pathways to be encoded as either positive or negative memories.The discovery suggests that in its creation of memories, the brain may be biased toward remembering things fearfully—an evolutionary quirk that may have helped keep our ancestors cautious.The findings “give us significant insights into how we deal with conflicting emotions,” said Tomás Ryan, a neuroscientist at Trinity College Dublin who was not involved in the study. It “has really challenged my own thinking in how far we can push a molecular understanding of brain circuitry.”It also opens opportunities to probe the biological underpinnings of anxiety, addiction, and other neuropsychiatric conditions that may sometimes arise when breakdowns in the mechanism lead to “too much negative processing,” Li said. In theory, targeting the mechanism through novel drugs could be an avenue to treatment.“This is really an extraordinary study” that will have a profound impact on psychiatric concepts about fear and anxiety, said Wen Li, an associate professor at Florida State University who studies the biology of anxiety disorders and was not involved in the study.Dangerous BerriesNeuroscientists are still far from understanding exactly how our brains encode and remember memories—or forget them, for that matter. Valence assignment is nonetheless seen as an essential part of the process for forming emotionally charged memories.The ability of the brain to record environmental cues and experiences as good or bad memories is critical for survival. If eating a berry makes us very sick, we instinctively avoid that berry and anything that looks like it thereafter. If eating a berry brings delicious satisfaction, we may seek out more. 
“To be able to question whether to approach or to avoid a stimulus or an object, you have to know whether the thing is good or bad,” Hao Li said.
The neuroscientists Kay Tye and Hao Li, a postdoctoral researcher in her laboratory at the Salk Institute for Biological Studies, identified a small peptide molecule, neurotensin, as the signal that determined whether memories were encoded as positive.
Courtesy of Salk Institute
Memories that link disparate ideas—like “berry” and “sickness” or “enjoyment”—are called associative memories, and they are often emotionally charged. They form in a tiny almond-shaped region of the brain called the amygdala. Though traditionally known as the brain’s “fear center,” the amygdala responds to pleasure and other emotions as well.
One part of the amygdala, the basolateral complex, associates stimuli in the environment with positive or negative outcomes. But it was not clear how it does that until a few years ago, when a group at the Massachusetts Institute of Technology led by the neuroscientist Kay Tye discovered something remarkable happening in the basolateral amygdala of mice, which they reported in Nature in 2015 and in Neuron in 2016.
Tye and her team peered into the basolateral amygdala of mice learning to associate a sound with either sugar water or a mild electric shock and found that, in each case, connections to a different group of neurons strengthened. When the researchers later played the sound for the mice, the neurons that had been strengthened by the learned reward or punishment became more active, demonstrating their involvement in the associated memory.
But Tye’s team couldn’t tell what was steering the information toward the right group of neurons. What acted as the switch operator?
Dopamine, a neurotransmitter known to be important in reward and punishment learning, was the obvious answer. But a 2019 study showed that although this “feel-good” molecule could encode emotion in memories, it couldn’t assign the emotion a positive or negative value.
So the team began looking at the genes expressed in the two areas where positive and negative memories were forming, and the results turned their attention to neuropeptides, small multifunctional proteins that can slowly and steadily strengthen synaptic connections between neurons. They found that one set of amygdala neurons had more receptors for neurotensin than the other.
This finding was encouraging because earlier work had shown that neurotensin, a meager molecule just 13 amino acids long, is involved in the processing of reward and punishment, including the fear response. Tye’s team set out to learn what would happen if they changed the amount of neurotensin in the brains of mice.
Tiny Molecule With a Big Personality
What followed were years of surgically and genetically manipulating mouse neurons and recording the behaviors that resulted. “By the time I finished my PhD, I had done at least 1,000 surgeries,” said Praneeth Namburi, an author on both the papers and the leader of the 2015 one.
During that time, Tye moved her growing lab across the country from MIT to the Salk Institute. Namburi stayed at MIT—he now studies how dancers and athletes represent emotions in their movements—and Hao Li joined Tye’s lab as a postdoc, picking up Namburi’s notes. The project was stalled further by the pandemic, but Hao Li kept it going by requesting essential-personnel status and basically moving into the lab, sometimes even sleeping there. “I don’t know how he stayed so motivated,” Tye said.
Neurons from several regions of the brain’s thalamus extend axons into the amygdala, but researchers found that only the paraventricular nucleus region (green) dictates valence.
Courtesy of Natsuko Hitora-Imamura
So what do these results suggest would happen if your valence-assignment system broke down—while an angry rhino was charging you, for example? “You would just only slightly care,” Tye said. Your indifference in the moment would be recorded in the memory. And if you found yourself in a similar situation later in life, your memory would not inspire you to try urgently to escape, she added.
However, the likelihood that an entire brain circuit would shut down is low, said Jeffrey Tasker, a professor in the brain institute at Tulane University. It’s more probable that mutations or other problems would simply prevent the mechanism from working well, instead of reversing the valence. “I would be hard-pressed to see a situation where somebody would mistake a charging tiger as a love approach,” he said.
Praneeth Namburi, a neuroscience researcher at the Massachusetts Institute of Technology, performed many of the early surgeries that helped to determine where and how the valence of memories is established.
Courtesy of Talis Reks
Hao Li agreed and noted that the brain likely has fallback mechanisms that would kick in to reinforce rewards and punishments even if the primary valence system failed. This would be an interesting question to pursue in future work, he said.
One way to study defects in the valence system, Tasker noted, might be to examine the very rare people who don’t report feeling fear, even in situations routinely judged as terrifying. Various uncommon conditions and injuries can have this effect, such as Urbach-Wiethe syndrome, which can cause calcium deposits to form in the amygdala, dampening the fear response.
The Brain Is a Pessimist
The findings are “pretty big in terms of advancing our understanding and thinking of the fear circuit and the role of the amygdala,” Wen Li said. We are learning more about chemicals like neurotensin that are less well known than dopamine but play critical roles in the brain, she said.
The work points toward the possibility that the brain is pessimistic by default, Hao Li said. The brain has to make and release neurotensin to learn about rewards; learning about punishments takes less work.
Further evidence of this bias comes from the reaction of the mice when they were first put into learning situations. Before they knew whether the new associations would be positive or negative, the release of neurotensin from their thalamic neurons decreased. The researchers speculate that new stimuli are assigned a more negative valence automatically until their context is more certain and can redeem them.
“You’re more responsive to negative experiences versus positive experiences,” Hao Li said. If you almost get hit by a car, you’ll probably remember that for a very long time, but if you eat something delicious, that memory is likely to fade in a few days.
Ryan is more wary of extending such interpretations to humans. “We’re dealing with laboratory mice who are brought up in very, very impoverished environments and have very particular genetic backgrounds,” he said.
Still, he said it would be interesting to determine in future experiments whether fear is the actual default state of the human brain—and if that varies for different species, or even for individuals with different life experiences and stress levels.
The findings are also a great example of how integrated the brain is, Wen Li said: The amygdala needs the thalamus, and the thalamus likely needs signals from elsewhere. It would be interesting to know which neurons in the brain are feeding signals to the thalamus.
A recent study published in Nature Communications found that a single fear memory can be encoded in more than one region of the brain. Which circuits are involved probably depends on the memory. For example, neurotensin is probably less crucial for encoding memories that don’t have much emotion attached to them, such as the “declarative” memories that form when you learn vocabulary.
For Tasker, the clear-cut relationship that Tye’s study found between a single molecule, a function, and a behavior was very impressive. “It’s rare to find a one-to-one relationship between a signal and a behavior, or a circuit and a function,” Tasker said.
Neuropsychiatric Targets
The crispness of the roles of neurotensin and the thalamic neurons in assigning valence might make them ideal targets for drugs aimed at treating neuropsychiatric disorders.
In theory, if you can fix the valence assignment, you might be able to treat the disease, Hao Li said.It’s not clear whether therapeutic drugs targeting neurotensin could change the valence of an already formed memory. But that’s the hope, Namburi said.Pharmacologically, this won’t be easy. “Peptides are notoriously difficult to work with,” Tasker said, because they don’t cross the blood-brain barrier that insulates the brain against foreign materials and fluctuations in blood chemistry. But it’s not impossible, and the field is very much headed toward developing targeted drugs, he said.Our understanding of how the brain assigns valence still has important gaps. It’s not clear, for example, which receptors the neurotensin is binding to in amygdala neurons to flip the valence switch. “That will bother me until it is filled,” Tye said.Too much is also still unknown about how problematic valence assignments may drive anxiety, addiction, or depression, said Hao Li, who was recently appointed as an assistant professor at Northwestern University and is planning to explore some of these questions further in his new lab. Beyond neurotensin, there are many other neuropeptides in the brain that are potential targets for interventions, Hao Li said. We just don’t know what they all do. He’s also curious to know how the brain would react to a more ambiguous situation in which it wasn’t clear whether the experience was good or bad.These questions linger in Hao Li’s brain long after he packs up and goes home for the night. Now that he knows which network of chatty cells in his brain drives the emotions he feels, he jokes with friends about his brain pumping out neurotensin or holding it back in response to every bit of good or bad news.“It’s clear that this is biology, it happens to everyone,” he said. That “makes me feel better when I’m in a bad mood.”Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences. | Biology |
Marine microorganisms are crucial for ocean health. Bacteria, archaea, fungi, algae and viruses make up most of the biomass in the seas and form the base of marine food webs. They support nutrient cycling and drive crucial biogeochemical processes, including key steps in the carbon, nitrogen and silicon cycles.
But the climate crisis is putting stress on oceans through steadily rising temperatures, longer and more frequent heatwaves, acidification and changes in nutrient levels. Understanding how marine microbes are affected is key to forecasting the future state of the oceans, and mitigating the effects of the crisis on marine ecosystems as well as the human communities that rely on them for livelihoods and food.
Ocean forecasting isn’t easy. Oceans are hugely complex systems, and forecasters need to incorporate an array of changes in ocean physics (waves, currents and interactions with the atmosphere), biology (how organisms react to the environment, as well as with one another) and chemistry (different forms of essential elements and their sensitivity to oxygen or pH). These models must cover a range of scales, from national waters to expanses of open ocean. They must also be able to simulate extreme states, such as marine heatwaves, and conduct simulations over hundreds of years.
Presently, there is little confidence in, or even consensus on, predictions of how marine microbes will react to changes in the climate. Researchers in marine microbiology, physiology, biogeochemistry and modelling must join forces to better observe, understand and, ultimately, model microbial processes. Here, I outline some priority areas.
Limitations of present models
Ocean models have expanded in their scope over the years. Originally, they were built to represent physical processes, such as large-scale circulation and the transport of heat and salt. In the 1980s and 1990s, simple versions of the carbon cycle were added. Since the 2000s, scientists have accounted for the role of phytoplankton in the cycling of carbon and other nutrients, through processes such as photosynthesis, nutrient limitation (in which growth is limited by the scarcity of an essential element, such as nitrogen or iron) and predation. Phytoplankton at the base of the food chain perform roughly half of the photosynthesis that occurs on Earth. The organisms’ impact can be assessed using the oceanic concentration of the photosynthetic pigment chlorophyll, which can be determined by satellite observation.
For reasons of computational efficiency, however, only a subset of key groups was modelled. This includes bloom-forming diatoms; small phytoplankton, such as Prochlorococcus and Synechococcus, that dominate nutrient-poor regions of the ocean; cyanobacterial diazotrophs that fix nitrogen by converting inert dinitrogen (N2) into the more useful ammonia (NH3); and coccolithophorids that produce shells of calcium carbonate (CaCO3).
These groups were accounted for using simple mathematical representations of the factors regulating their biomass, such as growth and predation rates. And the groups are assumed to follow the ‘law of the minimum’, according to which the least abundant resource is what limits growth rates; the impact of fluctuations in the levels of other essential resources is not included.
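To make the 'law of the minimum' concrete, here is a minimal sketch of how such a growth term is often written, with Monod-style limitation terms; the rate constants and concentrations are purely illustrative and do not come from any particular model.

```python
def monod_limitation(concentration, half_saturation):
    """Monod limitation term, a value between 0 and 1."""
    return concentration / (concentration + half_saturation)

def liebig_growth(mu_max, nutrients, half_saturations):
    """Growth under the 'law of the minimum': only the scarcest resource
    (relative to its half-saturation constant) limits the rate."""
    terms = [monod_limitation(nutrients[k], half_saturations[k]) for k in nutrients]
    return mu_max * min(terms)

# Illustrative values only (per-day maximum growth, micromolar concentrations).
mu_max = 1.2
nutrients = {"NO3": 0.5, "PO4": 0.05, "Fe": 0.0002}
half_saturations = {"NO3": 0.3, "PO4": 0.03, "Fe": 0.0005}

print(liebig_growth(mu_max, nutrients, half_saturations))  # iron is limiting here
```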
In recent years, such biogeochemical models, together with observation-based estimates, have become an integral part of efforts to assess fluxes of anthropogenic carbon accumulated in the ocean. Yet it remains challenging to predict changes in crucial biological fluxes with similar confidence. For example, it is not known whether the rate at which the biomass is generated by phytoplankton will increase or decrease under various climate-change scenarios1.
Such gaps in knowledge hinder scientists’ ability to understand, manage and mitigate impacts of the climate crisis on ocean health and the marine food supply2. The poor understanding of projected changes at the base of the food chain has huge implications for modelling and forecasting at all levels, including in Earth and environmental sciences, as well as for the global economy.
Bridge biological data and ocean modelling
In the past decade, technological advances have revolutionized scientists’ ability to observe and monitor oceanic biological processes across time and space. Satellite data are being used to assess ocean ecology through changes in the oceans’ optical properties, and autonomous robots are profiling the oceans, revealing variations in nutrient and chlorophyll concentrations.
In parallel, the explosion of ‘omics’ approaches — such as genomics, transcriptomics, proteomics and metabolomics — has brought about a molecular-level understanding of the complex functioning of marine microbes. These methods can help to illuminate the role of complex networks of interacting organisms3, and estimate how open-ocean communities can respond to future environmental changes4.
Emerging work that uses ocean biogeochemical, omic and remote-sensing data has revealed shortcomings in the current generation of ocean models. Notably, these models cannot reproduce resource stress or phytoplankton growth dynamics (in terms of either trends or variability)5–7. Resources that had previously been ignored by models, such as manganese, zinc and B vitamins5, are now known to influence phytoplankton growth and physiology8.
Furthermore, the law of the minimum seems to be an oversimplification. Earlier this year, a large-scale synthesis effort found that control by more than one resource (for example, iron and nitrogen) is relatively widespread8. Various ‘co-limitation’ scenarios exist in which two or more nutrients can limit growth, either independently of each other or interactively. This interconnectedness is corroborated by findings derived from proteomics, which suggest that marine microbes frequently experience scarcity in multiple resources at once5,9.
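For contrast, a toy calculation shows why co-limitation matters: under the minimum law only the scarcest resource counts, whereas under a simple multiplicative co-limitation scheme every scarce resource drags the rate down. The functional forms and numbers below are illustrative assumptions only, not taken from any published model.

```python
def monod(c, k):
    return c / (c + k)

def growth_minimum(mu_max, terms):
    """Liebig-style: the single scarcest resource sets the rate."""
    return mu_max * min(terms)

def growth_multiplicative(mu_max, terms):
    """Multiplicative co-limitation: every scarce resource reduces the rate."""
    rate = mu_max
    for t in terms:
        rate *= t
    return rate

# Two moderately scarce resources (e.g., iron and nitrogen), illustrative numbers.
terms = [monod(0.4, 0.3), monod(0.06, 0.03)]
print(growth_minimum(1.0, terms))         # ~0.57: only one limitation "counts"
print(growth_multiplicative(1.0, terms))  # ~0.38: co-limitation is more severe
```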
Genomic techniques are now routinely deployed during research voyages and ocean surveys. Yet the data they generate remain largely unexploited by the ocean biogeochemical models used for climate-change studies, which focus instead on representing bulk biological or biogeochemical indicators (such as the concentration of nutrients or chlorophyll).
Time to remodel
Researchers should now come together to embed biological information and insight into biogeochemical models. Several issues must be addressed to reduce uncertainties in the response of ocean microbes to global change. Some pertain to the scope for organisms to adapt, the importance of functional and biological diversity and whether specific groups of organisms that perform key biogeochemical functions are affected differently. The manner in which scientists detect and attribute change in microbial and biogeochemical systems is also crucial. Finally, researchers must explore the importance of biologically mediated feedbacks on the environment (such as whether cycling of compounds that are relevant to climate or biogeochemistry is affected by microbial activity); the role and size of any feedbacks from ocean-based efforts to remove carbon dioxide; and the sustainability of vital processes such as nitrogen fixation or calcium carbonate production.
The following three approaches to this challenge are being deployed, but remain limited.
Extending biogeochemical models. Incorporating further sets of organisms10 or growth-limiting resources11 in models can allow large-scale testing of whether these extra layers of detail affect results. But the utility of this approach is limited by its sheer complexity. Even the most elaborate current model would struggle to accommodate parameters that adequately represent microbial biodiversity and take into account phenomena such as co-limitation and how species respond to a changing environment, because simulations must be carried out at the global scale and over the long term.
Exploiting statistics. Statistical techniques can forecast the changes expected for a given species or ecological group as a function of well-measured predictors (such as sea surface temperature). These approaches use environmental conditions from ocean biogeochemical models or large-scale data sets to build statistical relationships with detailed biological data on organism abundance or interaction networks, for example.
Such techniques are widely used to infer how environmental changes affect the distribution of key organisms (such as plankton communities)12, and are sensitive to the density of observations. However, important biomes can be missed in some regions owing to insufficient sampling, and statistical methods don’t account for any changes in the link between the species of interest and possible future ocean conditions. Such issues have led to opposing predictions, for instance, of how the abundance of Prochlorococcus could change over time13.
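As a hedged illustration of the statistical approach described above (and of its main caveat, namely that the fitted relationship is simply assumed to hold under future conditions), one might fit an off-the-shelf regression model to abundance observations and then project it onto a warmer scenario. The data here are synthetic and the model choice is arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder observations: predictors (SST in deg C, nitrate in uM) and the
# measured abundance of some plankton group (arbitrary units).
rng = np.random.default_rng(0)
sst = rng.uniform(5, 30, 500)
nitrate = rng.uniform(0.01, 10, 500)
abundance = 1e4 * np.exp(-((sst - 22) / 6) ** 2) * nitrate / (nitrate + 0.5)

X = np.column_stack([sst, nitrate])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, abundance)

# Project onto a hypothetical future scenario: +2 deg C warming, lower nitrate.
# The fitted relationship is assumed to stay valid, which is exactly the caveat above.
future = np.column_stack([sst + 2.0, nitrate * 0.8])
print("mean abundance now:   ", model.predict(X).mean())
print("mean abundance future:", model.predict(future).mean())
```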
Using mechanistic metabolic modelling. Models based on microbial metabolisms revealed by genomic techniques can be coupled with environmental data from observations or ocean biogeochemical models. Such mechanistic metabolic models have the most potential in the longer term. Some have been used to explore co-limitation involving iron and manganese14 and how the cellular physiology of various strains of Prochlorococcus is connected to their large-scale distribution15. Ultimately, one could imagine a direct coupling of mechanistic metabolic models with ocean biogeochemical models to enable a dynamic two-way interaction between environmental change and ocean microbial health.
Modelling tools that embrace aspects of all three approaches are needed to address the impacts of global change on marine biological systems. For instance, a combination of mechanistic metabolic models and statistical approaches could simplify the representation of key cellular processes, which could then be parameterized for key phytoplankton groups that extend existing global ocean biogeochemical models.
Alternatively, it could be that the entire biological component of the current generation of ocean biogeochemical models will be replaced by a statistical approach. This would be informed by an underlying metabolic or genome-based model that focuses solely on how key biogeochemical fluxes (for example, nutrient cycling, oxygen production or the amount of biomass generated by phytoplankton) are related to changing environmental conditions.
More ambitiously, greater fundamental understanding and parallel developments in mathematical and ecological theory could harness growing computing power. This would facilitate the modelling of microbial molecular biology, biodiversity and biogeochemical cycles from a mechanistic standpoint in global ocean models, reducing uncertainties in forecasts by increasing realism.
Next steps
Over the next 10–20 years, funders must substantially invest in interdisciplinary science so that tools to explore global impacts on microbial ecosystems can be developed.
International collaborations will be needed. Scientists working in molecular biology, microbiology and biogeochemical oceanography are already linking worldwide efforts through a programme called BioGeoSCAPES.
An important focus is the training of a new generation of scientists who can operate across disciplines. This is under way through the formation of a cohort of early-career BioGeoSCAPES fellows.
The development of interoperable data sets based on common data and a pipeline that feeds new understanding into improved predictive models is crucial if scientists are to transition to a more coherent, joined-up international effort to better constrain the impacts of climate change. Ultimately, these efforts must be designed to feed into assessment activities overseen by groups sponsored by the United Nations, such as the Intergovernmental Panel on Climate Change and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services.
Model projections that account for marine microbial processes with better accuracy and greater confidence are crucial to climate forecasts. They can be achieved only by breaking down disciplinary silos. | Biology |
The mechanics behind woodpeckers’ signature behavior have mystified biologists for many years. New research in Current Biology, however, reveals that woodpecker heads remain rigid and hammer-like throughout the pecking process, which contradicts past theories that their heads act as soft shock-absorbers.
A Hard-Headed Hammer
Woodpeckers drill into tree trunks for a variety of reasons. They peck to find food, hollow out their homes and communicate with one another in a behavior known as “drumming.” In the face of all this pecking, biologists have hypothesized broadly about woodpeckers’ ability to hammer their heads into the trees without hurting themselves. Over the years, they’ve settled on the theory that a spongy, malleable section of the woodpeckers’ skulls absorbed most of the force of the impact, like a shock-absorbing helmet.
To test the theory, a team of researchers studied the successive impacts involved in woodpecker pecking. They measured the decelerations of the successive impacts and analyzed them through biomechanical models. Researchers determined that the birds’ heads absorbed none of the impacts, challenging the traditional shock-absorption theory.
“By analyzing high-speed videos of three species of woodpecker, we found that woodpeckers do not absorb the shock of the impact,” says primary study author Sam Van Wassenbergh of Universiteit Antwerpen, Belgium, in a press release. “This myth of shock absorption in woodpeckers is now busted by our findings.”
Ultimately, the team concluded that shock absorbance was not only absent but would actually impede the birds if present. Any reductions in the power of their pecking would stand in the way of their ability to drill and drum into trees.
Protecting Bird Brains
More than refuting the traditional theories, the team’s findings also formulate new questions for researchers to investigate. For instance, without the protection of shock absorption, how do woodpeckers avoid hurting their heads? Researchers found that the shape and size of the woodpeckers’ brains meant that the shocks from their pecking wouldn't reach the threshold of impact to cause concussions. Woodpecker anatomy, the team says, is all set to withstand the pecking.
“The absence of shock absorption does not mean their brains are in danger during the seemingly violent impacts,” Van Wassenbergh says in a press release. “Even the strongest shocks from the over 100 pecks that were analyzed should still be safe for the woodpeckers’ brains.”
The research, the team says, may reveal why woodpeckers are the size that they are. Although bigger woodpeckers with a stronger peck could drill into tree trunks more successfully, they would probably be more prone to concussions. At their current size, the researchers conclude, there’s no danger in woodpeckers continuing to peck away. | Biology |
By simulating early Earth conditions in the lab, researchers have found that without specific amino acids, ancient proteins would not have known how to evolve into everything alive on the planet today -- including plants, animals, and humans.
The findings, which detail how amino acids shaped the genetic code of ancient microorganisms, shed light on the mystery of how life began on Earth.
"You see the same amino acids in every organism, from humans to bacteria to archaea, and that's because all things on Earth are connected through this tree of life that has an origin, an organism that was the ancestor to all living things," said Stephen Fried, a Johns Hopkins chemist who co-led the research with scientists at Charles University in the Czech Republic. "We're describing the events that shaped why that ancestor got the amino acids that it did."
The findings are newly published in the Journal of the American Chemical Society.
In the lab, the researchers mimicked primordial protein synthesis of 4 billion years ago by using an alternative set of amino acids that were highly abundant before life arose on Earth.
They found ancient organic compounds integrated the amino acids best suited for protein folding into their biochemistry. In other words, life thrived on Earth not just because some amino acids were available and easy to make in ancient habitats but because some of them were especially good at helping proteins adopt specific shapes to perform crucial functions.
"Protein folding was basically allowing us to do evolution before there was even life on our planet," Fried said. "You could have evolution before you had biology, you could have natural selection for the chemicals that are useful for life even before there was DNA."
Even though the primordial Earth had hundreds of amino acids, all living things use the same 20 of these compounds. Fried calls those compounds "canonical." But science has struggled to pinpoint what's so special -- if anything -- about those 20 amino acids.
In its first billion years, Earth's atmosphere consisted of an assortment of gases like ammonia and carbon dioxide that reacted with high levels of ultraviolet radiation to concoct some of the simpler canonical amino acids. Others arrived via special delivery by meteorites, which introduced a mixed bag of ingredients that helped life on Earth complete a set of 10 "early" amino acids.
How the rest came to be is an open question that Fried's team is trying to answer with the new research, especially because those space rocks brought much more than the "modern" amino acids.
"We're trying to find out what was so special about our canonical amino acids," Fried said. "Were they selected for any particular reason?"
Scientists estimate Earth is 4.6 billion years old, and that DNA, proteins, and other molecules didn't begin to form simple organisms until 3.8 billion years ago. The new research offers new clues into the mystery of what happened during the time in between.
"To have evolution in the Darwinian sense, you need to have this whole sophisticated way of turning genetic molecules like DNA and RNA into proteins. But replicating DNA also requires proteins, so we have a chicken-and-egg problem," Fried said. "Our research shows that nature could have selected for building blocks with useful properties before Darwinian evolution."
Scientists have spotted amino acids in asteroids far from Earth, suggesting those compounds are ubiquitous in other corners of the universe. That's why Fried thinks the new research could also have implications for the possibility of finding life beyond Earth.
"The universe seems to love amino acids," Fried said. "Maybe if we found life on a different planet, it wouldn't be that different."
This research is supported by the Human Frontier Science Program grant HFSP-RGY0074/2019 and the NIH Director's New Innovator Award (DP2-GM140926).
Authors include Anneliese M. Faustino, of Johns Hopkins; Mikhail Makarov, Alma C. Sanchez Rocha, Ivan Cherepashuk, Robin Krystufek, and Klara Hlouchova, of Charles University; Volha Dzmitruk, Tatsiana Charnavets, and Michal Lebl, of the Czech Academy of Sciences; and Kosuke Fujishima, of Tokyo Institute of Technology.
Materials provided by Johns Hopkins University. Original written by Roberto Molar Candanosa. | Biology |
Engineered Bacteria Sense and Record Their Environment
Scientists from Columbia University have engineered bacteria to record signals from the external environment in their swarming patterns. The team successfully decoded these signals using artificial intelligence (AI). The research, published in Nature Chemical Biology, provides a framework for building a “natural” environment recording system.
Bacteria create unique patterns through swarming
Many species of bacteria are motile, a characteristic that can support their survival if environmental conditions are unfavorable. Several types of bacterial motility have been classified, such as twitching, gliding, swimming and swarming. The latter is a coordinated movement of bacterial cells mediated by the flagella – a hairlike structure that can be likened to a “tail”. In response to specific environmental cues, the flagella create a whip-like motion that enables bacterial cells to swarm collectively, sometimes producing patterns that are visible to the naked eye. Scientists have even been known to “accidentally” produce works of art in the lab while studying bacterial swarm patterns; Balagam et al engineered mutant strains of Myxococcus xanthus – predatory bacteria that form cooperative swarms – which resemble Van Gogh’s “Starry Night” when swarming.
Researchers in the laboratory of Dr. Tal Danino, associate professor in the department of biomedical engineering at Columbia University, had been contemplating how bacteria that naturally form patterns – such as Proteus mirabilis – could be engineered to create recording systems. Their inspiration came from other sources of patterns in nature that document useful information, such as tree rings, which visibly document a tree’s age and climate history. Could bacteria be engineered to possess a similar function? “This seemed to us to be an untapped opportunity to create a natural recording system for specific cues,” says Danino, who is also a member of Columbia’s Data Science Institute (DSI).
Engineering P. mirabilis to record its environment
The researchers’ reasons for focusing on P. mirabilis were twofold: its native patterns can be easily seen by the naked eye and it forms on solid agar media. Combined, these factors reduce the cost of a hypothetical natural recording system. The bacteria – which form bullseye-like colony patterns – alternate between phases of growth, where dense circles form, and phases of swarming, where the colony expands outwards.
Danino and colleagues engineered P. mirabilis by adding genetic circuits that enabled the bacteria to “write” external inputs – specifically chosen by the researchers – into a visible record. Such inputs included a change in temperature or adding sugar molecules or heavy metals to the medium. In response to these inputs, the engineered bacteria changed their swarming ability, which produced a visible change in the output – a different pattern.
Dr. Andrew Laine, Percy K. and Vida L. W. Hudson Professor of biomedical engineering and a DSI member, and Dr. Jia Guo, assistant professor of neurobiology at the Columbia University Irving Medical Center, joined the study to apply deep learning to decode the input from the pattern. Using this approach, the team could predict the sugar concentration in a sample, or how many times the temperature changed while the colony grew.
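As a rough, hypothetical sketch of how such a decoder could be set up (not the network used in the study), one might train a small convolutional regressor that maps an image of a swarm plate to the input condition it encodes. The architecture, image size and target values below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SwarmDecoder(nn.Module):
    """Toy convolutional regressor: swarm-plate image -> estimated input level."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.head(self.features(x))

# One training step on a dummy batch of four plate images (1 x 128 x 128 pixels).
model = SwarmDecoder()
images = torch.randn(4, 1, 128, 128)
targets = torch.tensor([[0.0], [0.5], [1.0], [2.0]])  # e.g., % sugar (made-up values)
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()
```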
What is deep learning?
Deep learning is a type of machine learning that teaches computers to replicate the human brain and learn by example.
Deep learning – a sophisticated tool in the AI toolbox – could be harnessed to extract information from incredibly complex patterns, the researchers suggest. “Our goal is to develop this system as a low-cost detection and recording system for conditions such as pollutants and toxic compounds in the environment,” says Dr. Anjali Doshi, the study’s lead author and a recent graduate from Danino’s lab. “To our knowledge, this work is the first study where a naturally pattern-forming bacterial species has been engineered by synthetic biologists to modify its native swarming ability and function as a sensor.”
The ultimate goal of the Danino lab is to develop a device that can take the recording system out of the lab. Their next steps towards this ambition will involve engineering bacteria to detect a wider range of inputs and to move the system into probiotic bacteria. Beyond creating natural recording devices, the team also have their sights set on engineering bacterial behaviors in the human body, which could enable better control of bacteria beyond the use of antibiotics.
“This work creates an approach for building macroscale bacterial recorders, expanding the framework for engineering emergent microbial behaviors,” the researchers write in the Nature paper.
Reference: Doshi A, Shaw M, Tonea R, et al. Engineered bacterial swarm patterns as spatial records of environmental inputs. Nat Chem Bio. 2023. doi: 10.1038/s41589-023-01325-2
This article is a rework of a press release issued by Columbia University. Material has been edited for length and content. | Biology |
Although humans smell with two nostrils, we can only detect a given scent as a whole — a steaming cup of coffee or pungent skunk, for instance. But your brain might interpret things differently, a new study suggests.
The research, conducted with hospital patients with electrodes implanted in their brains, suggests that the smells flowing through each nostril are processed as two separate signals in the part of the brain that receives smell inputs. Notably, the signals are separated in time.
The fact that the two signals may not be integrated in the brain's smell-processing center suggests there might be some advantage to keeping them separate, the researchers theorized. The research could improve our understanding of the neuroscience of smell, which is less understood than vision and hearing. We know that the brain takes the differing data from both eyes and ears into account, for instance, and maybe a similar system exists for smell.
The researchers were curious as to how the brain makes use of these two sensory inputs from the nose, said Gulce Nazli Dikecligil, a postdoctoral researcher in the University of Pennsylvania Department of Neurology and the lead author of the study, published this month in the journal Current Biology.
For the study, the researchers attached tubes to the insides of the nostrils of 10 volunteers who previously had electrodes implanted in their brains to diagnose and treat drug-resistant epilepsy. The scientists then piped three scents — coffee, banana and eucalyptus — into each volunteer's left and right nostrils, separately, as well as both nostrils simultaneously. They also pumped in odorless air, for comparison.
They asked the patients to identify the smells and recorded the patients' brain activity, specifically in the piriform cortex, the main part of the cortex that processes smells.
Smell signals from each nostril took a different amount of time to be encoded by each side of the piriform cortex. The signals were encoded about 480 milliseconds faster to the side of the cortex that correlated with the nostril detecting the smell — so, if the patient smelled banana using the right nostril, that information would travel faster to the right side of the cortex than to the left.
The researchers observed the same effect when the scent was introduced to both nostrils; the average time between encoding for each nostril was about 500 milliseconds.
"The brain seems to be maintaining, at least at the level of cortex that we were looking at, two representations — one corresponding to each nostril," Dikecligil told Live Science.
They used machine learning to further analyze the signals and decipher which scents corresponded to what brain activity. This revealed that, although the two signals were separated in time, they resulted in very similar electrical patterns. However, there were still noticeable differences in the patterns for seven of the 10 patients, meaning there could be some differences in how the brain processes smells from each nostril.
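A schematic sketch of this kind of decoding analysis, with simulated trials standing in for the intracranial recordings and an off-the-shelf classifier in place of the authors' pipeline, might look like the following; the features and labels are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder data: 120 trials x 40 features (e.g., band power per electrode in a
# short post-odor window); labels are 0 = coffee, 1 = banana, 2 = eucalyptus.
X = rng.normal(size=(120, 40))
y = np.repeat([0, 1, 2], 40)
X[y == 1, :5] += 0.8    # inject a weak, fake odor-specific signature
X[y == 2, 5:10] += 0.8

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, y, cv=5)
print("decoding accuracy per fold:", np.round(scores, 2))
```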
This time difference between the signals didn't seem related to how well participants could identify a smell. They were equally accurate for smells in either nostril and slightly more accurate for smells delivered to both nostrils.
Overall, participants encoded information faster when they smelled a scent with both nostrils, although the time difference between the two nostril signals remained similar. This could be because they got twice the amount of odor or because of a computational advantage, but the researchers aren't sure.
The research isn't the first to find that the nostrils might operate individually or differently. A 1999 study published in the journal Nature found that differences in airflow could sensitize each nostril to different scents. But the recent study is the first to use data from patients with electrodes in their brains and to find the observed time delay.
Future research could investigate if humans use the smell inputs from each nostril in a similar way to the visual data from our eyes, or auditory inputs from our ears. Differences in our vision from each eye give us depth perception, for instance, and we have a similar system for hearing.
"We have two sensory organs for most sensory systems," Dikecligil said. "Maybe there's an overarching principle that guides all of them in terms of how they utilize and compare and contrast [information]."
| Biology |
Researchers devise a better way to build aptamers
When is having trillions of choices not enough? Apparently when making aptamers.
Aptamers—short strands of DNA or RNA capable of binding to specific target receptors—can be incredibly useful for measuring metabolites and proteins in biological research, identifying disease markers, and treating disease. They behave much like antibodies, but are easier to synthesize and incorporate in biosensors, are more stable at room temperature, have a longer shelf life, and are far less likely to trigger unwanted immune responses.
While aptamers come in countless configurations, finding exactly the right one for a specific task is daunting. Researchers typically identify aptamers by sifting through massive libraries of randomly generated bits of DNA or RNA. The rare aptamer that shows an affinity for the desired target can be improved through further studies, but only to a point. Consequently, aptamers have found limited use in biomedicine.
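The screening problem can be caricatured in a few lines of code: generate a large random library and rank the candidates with a scoring function. The 'affinity' function below is a pure placeholder (real affinities come from binding assays, not string matching), so this only conveys the scale of the search.

```python
import random

BASES = "ACGU"
random.seed(42)

def random_aptamer(length=40):
    """One random RNA-like sequence from a notional library."""
    return "".join(random.choice(BASES) for _ in range(length))

def toy_affinity(seq, motif="GGAUCC"):
    """Placeholder score: counts occurrences of an arbitrary motif."""
    return sum(seq[i:i + len(motif)] == motif for i in range(len(seq) - len(motif) + 1))

library = [random_aptamer() for _ in range(100_000)]
ranked = sorted(library, key=toy_affinity, reverse=True)
print("best candidate:", ranked[0], "score:", toy_affinity(ranked[0]))
```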
But that may soon change. Milan Stojanovic, Ph.D., professor of medical sciences, biomedical engineering, and systems biology at Columbia University Vagelos College of Physicians and Surgeons, and his colleagues have devised a better way to build aptamers. The researchers first analyzed how individual structural units within organic compounds (functional groups or fragments) contribute to binding within 27 target-aptamer pairs. The analysis yielded new insights into how to overcome structural barriers that prevent aptamers from engaging their targets.
To demonstrate the utility of their approach, the researchers created two new aptamers with potential clinical applications. One aptamer measures blood levels of the amino acid leucine and could be applied to newborn screening for maple syrup urine disease, a metabolic disorder in which the body cannot properly convert food into energy.
Another aptamer detects blood levels of the antifungal drug voriconazole, commonly used in immunocompromised patients. High levels of voriconazole can cause serious side effects, such as brain and liver toxicity. The two new aptamer-based tests have the potential to improve upon existing blood tests for leucine and voriconazole.
The paper is published in the journal Science.
More information: Kyungae Yang et al, A functional group–guided approach to aptamers for small molecules, Science (2023). DOI: 10.1126/science.abn9859
Journal information: Science
Provided by Columbia University Irving Medical Center | Biology |
Traces of the last suppers of some of the world's earliest known animals have been discovered in their 558-million-year-old fossils.
Key points:
- Ediacaran biota are a strange group of soft-bodied organisms that lived between 600 million and 540 million years ago
- An international team has analysed organic molecules in fossils from Russia to see whether the organisms had guts
- Their analysis suggests some ancient creatures known as Kimberella digested food in a similar way to us
In the moments before the animals were entombed forever, they grazed on green algal slime on the shallow sea floor, according to an analysis of different types of molecules known as sterols, which were preserved in the fossils.
The finding, published today in the journal Current Biology, sheds new light on an enigmatic group of organisms that first appeared on Earth in the Ediacaran Period around 600 million years ago.
"This is the first direct evidence we have for the diet of Ediacaran animals," said palaeontologist Ilya Bobrovskiy, who led the study.
Most scientists had assumed these animals dined on cyanobacterial mats, said study co-author Jochen Brocks of the Australian National University.
"But we can see now these were actually already eating algal mats," Professor Brocks said. "It's probably the difference between eating a raisin and eating a watermelon."
Their nutritious diet may have helped these early animals become very large, very quickly. And while some animals absorbed nutrients through their skin, the analysis also suggested others were more advanced and digested food in much the same way as we do.
Scientists analysed beautifully preserved Dickinsonia fossils discovered in Russia. (Supplied: Ilya Bobrovskiy/Australian National University)
Failed experiments, weirdos and modern animals
The shallow seas of the Ediacaran Period were filled with a rag-tag bunch of large, soft-bodied organisms.
"It's a mixture of different creatures at the very early stage of evolution, with failed experiments and weirdos, but amongst them the modern animals that would then make it into the Cambrian Explosion," Professor Brocks said.
What we know about this relatively idyllic time before the rise of predatory animals with claws, shells and spikes in the Cambrian Period is imprinted in rocks such as those in South Australia, where they were first discovered.
Dickinsonia fossils were first discovered in the Ediacara Hills in the Flinders Ranges in South Australia. (Supplied: Ilya Bobrovskiy)
"These fossils are usually poorly preserved. They're like a death mask in metamorphosed, very rough sandstones," Professor Brocks said. "All we have is casts of the surface, we don't know what they look like from the underside or inside."
As a result, scientists have long debated whether these creatures were animals or strange types of plants or algae.
Then in 2013, Dr Bobrovskiy from GFZ-Potsdam in Germany discovered exquisitely preserved fossils in the remote cliffs of the White Sea in Russia.
Ilya Bobrovskiy (left) with a Dickinsonia fossil excavated from cliffs of the White Sea in Russia. (Supplied: Ilya Bobrovskiy)
The fossils included round-shaped creatures known as Dickinsonia that settled on slime, worm-like Calyptrina and Kimberella, a primordial organism that resembled a slug with a snout.
Analysis of the Dickinsonia fossils by Dr Bobrovskiy at Dr Brocks' lab revealed they contained cholesterol found in animals. The team then decided they wanted to find out if any of these fossilised animals had guts.
It was largely thought that Dickinsonia and Calyptrina were passive feeders that absorbed nutrients through their skin, similar to tiny, transparent placozoa and tube worms that live in our oceans today.
Modern tube worms do not have guts, but absorb nutrients through their skin from hydrothermal vents. (Flickr: Oregon State University)
But Kimberella was thought to be more advanced and have a gut, based on pairs of scratch marks and what could potentially be balls of poo seen around fossils.
"The only explanation for that was it scraping stuff up, pulling it into a mouth and digesting it," Professor Brocks said. "But if you look at the fossils, even at the organically preserved ones from the White Sea, you don't see a gut; you don't see anything."
An ancient animal with a modern gut
Animals usually have food in their gut when they die, so the researchers thought they might be able to work out if Kimberella had a gut by what it had been eating.
Algae contain a particular cocktail of plant sterols and some fatty molecules (cholesteroids such as cholesterol and ergosteroids) found in animals, fungi and mould. On the other hand, bacteria contain another fat molecule known as hopanol, instead of cholesterol.
A Kimberella fossil excavated from the cliffs of the White Sea in Russia. (Supplied: Ilya Bobrovskiy)
The analysis suggested that Kimberella gobbled up mainly algae. The structure of these molecules also indicated they'd broken down in an anoxic environment like the gut.
"We can see the algal sterols in the Kimberella fossil, but they decayed in the typical anaerobic gut way, very distinct from the decay pattern in the mats," Professor Brocks said.
To their surprise, they could also see that the animal's gut was taking up cholesterol rather than plant sterols.
"Kimberella had absorbed the cholesterols for its own use and only left ergosteroids and plant steroids in the gut," Professor Brocks said. "That is quite some sophistication — that's like us."
The team's analysis also suggested the worm-like Calyptrina had a gut. "It was probably feeding off what it was getting at the surface, like maybe microalgae floating by."
But the iconic Dickinsonia was a bottom feeder that absorbed its food as it lived on its algal mat. "We analysed 17 fossils and not a trace of a gut ... so that almost certainly confirmed Dickinsonia was a pretty ancient weirdo," Professor Brocks said.
Debate about enigmatic Ediacarans far from over
Palaeontologist Jim Gehling of the South Australian Museum is a world expert in Ediacaran biota from the red Precambrian sandstone of the Flinders Ranges. Professor Gehling said the findings that Kimberella actively foraged "fits what we find in the trace fossils" in South Australia. But the debate about how these organisms functioned is far from over.
"This has been one of the most controversial parts of the fossil record for a long time," Professor Gehling said. "Every textbook tells you that real animals began at the Cambrian and Ediacarans were a failed experiment."
While he has advocated organisms such as Dickinsonia were animals, he said others were not convinced the Russian fossils could have preserved material for millions of years.
The cliffs surrounding Russia's White Sea contain a lot of organic material that hasn't been compressed into hard rock layers. (Supplied: Ilya Bobrovskiy)
"I'm not an organic geochemist and don't claim to have any wisdom on that, but what I do know is they do have rich, organic material trapped in that sediment [in Russia]," he said. "It's not been buried by kilometres of rock like the Flinders Ranges material has been."
That means that the geochemical signatures seen in the fossils could be valid. But, he said, it was still important to query the science and see if the same signatures could be found in more fossils from Russia and different parts of the world such as China and Canada.
"When you think everything you read in the textbooks is right, then you've got a problem because textbooks always will be behind what's happening in active research," Professor Gehling said. "We are looking for people who are willing to test, and if necessary, we have to abandon ideas and accept better explanations." | Biology |
Steering undulatory micro-swimmers in a fluid flow through reinforcement learning
New research looks at navigation strategies for deformable micro-swimmers in a viscous fluid faced with drifts, strains, and other deformations.
A deformable micro-swimmer is a small-scale organism or artificial structure that uses sinusoidal body undulations to propel itself through a fluid environment.
The term applies to organisms like bacteria which navigate through fluids using whip-like tails called flagella, sperm cells propelling themselves through the female reproductive system, and even nematodes, tiny worms that move through water or soil with undulations.
Micro-swimmers can also describe tiny micro-robots constructed from soft-materials designed to respond to stimuli and perform tasks like drug delivery on a micro-scale.
That means the study of micro-swimmers has applications in a vast array of scientific fields, from biology to fundamental physics to nanorobotics.
In a new paper, Jérémie Bec, a researcher at CNRS and the Center Inria d'Université Côte d'Azur, and his colleagues attempt to find an optimal navigation policy for micro-swimmers, which is crucial for enhancing their performance, functionality, and versatility in applications such as targeted drug delivery. The research is published in The European Physical Journal E.
"Finding an optimal navigation policy for micro-swimmers is crucial for enhancing their performance, functionality, and versatility in the mentioned applications," Bec says. "By determining an optimal navigation policy, micro-swimmers can effectively adapt and respond to changes in the fluid environment. This enables them to navigate through obstacles, avoid hazards, and exploit flow patterns for improved locomotion.
"An optimal navigation policy ensures their ability to maneuver and explore their surroundings efficiently," Bec adds.
The researcher explains that, beyond this, an optimal navigation policy guarantees robust performance across the different conditions and variations micro-swimmers encounter as they undulate through a fluid environment.
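As a hypothetical sketch of the reinforcement-learning framing (not the undulatory swimmer model or the algorithm used in the paper), a tabular Q-learning agent can learn a heading policy for a point swimmer fighting an adverse drift; all dynamics, rewards and constants here are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CELLS, ACTIONS = 20, 2        # discretized 1-D positions; actions: swim left or right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
SWIM, DRIFT = 1.0, -0.4         # self-propulsion speed vs. adverse mean drift
GOAL = N_CELLS - 1

Q = np.zeros((N_CELLS, ACTIONS))

for episode in range(2000):
    s = 0
    for _ in range(200):
        # Epsilon-greedy action choice, then noisy advection-plus-swimming step.
        a = rng.integers(ACTIONS) if rng.random() < EPS else int(np.argmax(Q[s]))
        velocity = (SWIM if a == 1 else -SWIM) + DRIFT + rng.normal(0, 0.3)
        s_next = int(np.clip(round(s + velocity), 0, N_CELLS - 1))
        reward = 1.0 if s_next == GOAL else -0.01   # reach the target quickly
        Q[s, a] += ALPHA * (reward + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == GOAL:
            break

print("learned action per cell (1 = swim toward the goal):", np.argmax(Q, axis=1))
```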
Bec says that the team was particularly intrigued by the notable level of variability in the performance of the machine-learning strategies they employed. The unexpected variability in performance granted the team valuable insights and allowed them to identify optimal strategies that surpassed their initial expectations.
"We gained a deeper understanding of the complex dynamics involved in optimizing navigation policies for micro-swimmers," Bec concludes. "These findings underscore the importance of exploring beyond conventional expectations and embracing the potential for variability and unpredictability in artificial intelligence."
More information: Zakarya El Khiyati et al, Steering undulatory micro-swimmers in a fluid flow through reinforcement learning, The European Physical Journal E (2023). DOI: 10.1140/epje/s10189-023-00293-8
Journal information: European Physical Journal E
Provided by Springer | Biology |
New database unifies the information on damage to European forests over the last 60 years
The University of Córdoba is participating in the creation of the first database that harmonizes the recording of disturbances caused by insects and diseases in forests in eight European countries by combining remote sensing, satellite images and field data
Forest damage caused by insects and diseases is increasing in many parts of the world due to climate change as reductions in plants' defense mechanisms, induced by global warming, seem to contribute to forests' increased vulnerability to the incidence of pathogens and diseases.
These disturbances jeopardize many of the beneficial effects that forests offer the world, such as carbon sequestration, the regulation of water flows, wood production and the conservation of biodiversity. Having a complete and harmonized map of what these disturbances are and have been in Europe is essential to be able to understand and anticipate future incidents, thus protecting forests and their advantages.
This unified European registry did not exist until now. An international team coordinated by the Joint Research Centre (JRC) of the European Commission, on which Rocío Hernández and José Luis Quero, researchers in the Department of Forest Engineering at the University of Cordoba, have worked, has developed a new spatially detailed database on disturbances caused by pathogens and diseases: the Database of European Forest Insect and Disease Disturbances—DEFID2.
"We worked for months on a committee of experts in which the different systems for the recording of these disturbances in the different countries and regions were presented, and a series of links were established, giving rise to this more simplified, but very robust, common database, one that greatly minimizes subjectivity, and that we tested with data from the different countries," says Rocío Hernández about the process of creating this "common language for all the scenarios of Europe's forests."
All countries can now translate their records into the common language that is the DEFID2 and make them available to the entire community via this open tool.
The database contains more than 650,000 harmonized georeferenced records, mapping insects and diseases occurring between 1963 and 2021 in European forests. The records currently cover eight different countries and were acquired through various methods, such as land surveys and remote sensing techniques.
"The important thing is that this harmonized protocol allows anyone to supply the database. This way we can expand the number of affected areas included in order to increase the power of predictive models and reduce levels of uncertainty," explains Quero.
Records in DEFID2 are described by a set of qualitative attributes, including the severity and patterns of damage, pathogens, host tree species, climate-driven triggers, silvicultural practices, and eventual health interventions. In addition, "there is a very interesting component: this is the first database that connects with remote sensing data," says Hernández. In this way, the spatial pattern of damage and the temporal pattern are united.
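Purely to illustrate what such a harmonized record might contain, a hypothetical schema (not the actual DEFID2 layout) could be sketched as a small data class; the field names and example values are assumptions based only on the attributes listed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisturbanceRecord:
    """Illustrative record layout only -- not the actual DEFID2 schema."""
    country: str
    latitude: float
    longitude: float
    year: int
    agent: str                              # e.g., "bark beetle", "oak decline"
    host_species: str                       # e.g., "Quercus pyrenaica"
    severity: str                           # e.g., "low" / "moderate" / "severe"
    damage_pattern: str                     # e.g., "scattered", "patchy", "stand-replacing"
    climate_trigger: Optional[str] = None   # e.g., "drought", "windthrow"
    intervention: Optional[str] = None      # e.g., "salvage logging"

record = DisturbanceRecord("ES", 37.9, -4.8, 2017, "oak decline",
                           "Quercus pyrenaica", "moderate", "patchy",
                           climate_trigger="drought")
print(record)
```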
The database is complemented by Landsat satellite time series of the Normalized Burn Ratio (NBR) for the affected forest areas, an index that is very sensitive to abrupt changes in vegetation and that allows one, through images, to see the onset, duration and magnitude of the disturbance.
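NBR is conventionally computed from near-infrared (NIR) and shortwave-infrared (SWIR) reflectance as (NIR - SWIR) / (NIR + SWIR). Below is a minimal sketch with toy arrays standing in for Landsat scenes; the band values and the before/after comparison are illustrative only.

```python
import numpy as np

def normalized_burn_ratio(nir, swir):
    """NBR = (NIR - SWIR) / (NIR + SWIR); it drops sharply when the canopy is damaged."""
    nir = nir.astype("float64")
    swir = swir.astype("float64")
    return (nir - swir) / (nir + swir + 1e-9)

# Toy 2 x 2 "scenes" before and after a disturbance (reflectance values in 0-1).
nir_before, swir_before = np.full((2, 2), 0.45), np.full((2, 2), 0.15)
nir_after,  swir_after  = np.full((2, 2), 0.30), np.full((2, 2), 0.25)

dnbr = (normalized_burn_ratio(nir_before, swir_before)
        - normalized_burn_ratio(nir_after, swir_after))
print(dnbr)  # positive values flag a drop in vegetation condition
```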
In addition to taking into account spatial and temporal patterns, which facilitates remote sensing with satellite passage data at different times, there is a third important level of information: the interaction between factors.
As Quero explains, "damage by pathogens and diseases are biotic damage (internal, of living organisms), but they have an abiotic history (external factors)." That is, the information on biotic damage is cross-checked with environmental events, such as drought, wind and fires. Both factors can be detected by remote sensing and the relationship between the two is analyzed, both past and future, in order to predict whether certain environmental conditions can be a breeding ground for a new disease or the development of a pathogen.
The drying of Quercus pyrenaica, mainly oak and cork oak species; and decay in conifers, such as wild pine, black pine and the Spanish fir, are some of the cases of damage to Spanish forests that have been included thanks to studies by the UCO. This data was gathered by these researchers over more than 10 years and through several research projects that, in a pioneering way, focused on the use of remote sensing to document and analyze tree decay damage.
The DEFID2 database, the product of this common European effort, is a novel resource that makes a unique contribution to designing networks of experiments, improving understanding of the ecological processes behind biotic disturbances of forests, monitoring their dynamics, and improving their representation in terrestrial and climate models. Using it, and continuing to feed it data, will bolster its potential to protect forests and the benefits they provide.
The findings are published in the journal Global Change Biology.
More information: Giovanni Forzieri et al, The Database of European Forest Insect and Disease Disturbances: DEFID2, Global Change Biology (2023). DOI: 10.1111/gcb.16912
Journal information: Global Change Biology
Provided by University of Córdoba | Biology |
Abstract
Biological invasions are a multi-stage process (i.e., transport, introduction, establishment, spread), with each stage potentially acting as a selective filter on traits associated with invasion success. Behavior (e.g., exploration, activity, boldness) plays a key role in facilitating species introductions, but whether invasion acts as a selective filter on such traits is not well known. Here we capitalize on the well-characterized introduction of an invasive lizard (Lampropholis delicata) across three independent lineages throughout the Pacific, and show that invasion shifted behavioral trait means and reduced among-individual variation—two key predictions of the selective filter hypothesis. Moreover, lizards from all three invasive ranges were also more behaviorally plastic (i.e., greater within-individual variation) than their native range counterparts. We provide support for the importance of selective filtering of behavioral traits in a widespread invasion. Given that invasive species are a leading driver of global biodiversity loss, understanding how invasion selects for specific behaviors is critical for improving predictions of the effects of alien species on invaded communities.
Introduction
Humans are key drivers of global environmental change1,2. Anthropogenic activities have redistributed the world’s biota and mediated species colonization of regions beyond their native range3,4. The consequences of these biological introductions are severe. Invasive species can disrupt ecological communities5, drive population declines and species extinctions6,7, and continue to cost the global economy billions of dollars every year8,9. Yet, despite the great number of cases and the severity of their impacts, only a small fraction of species that undergo human-assisted transportation will establish and become invasive10,11. Thus, identifying the traits that are selectively favored in invasive populations, and how they mediate invasion success, is of significant environmental and economic concern12.
Successful invasion is a multi-stage process, and each stage (i.e., transport, introduction, establishment, spread) represents a new challenging circumstance that species go through, and in which they can succeed or fail13. In this regard, an exciting, but untested, idea is that some introduced species may already be primed to succeed as the invasion process itself could act as a sequential selective filter promoting biological traits associated with invasion success14,15. It is well established that behavior can play a key role in mediating species invasions14,16, but whether invasion acts as a selective filter on behavioral traits is not well known17,18. Previous research has shown that behavioral traits facilitating invasive range expansions can be positively selected during invasions and mechanisms such as spatial sorting and subsequent interbreeding of highly dispersive individuals at the range-edge (i.e., Olympic Village effect) can further promote this process19,20,21,22. Moreover, the exposure to, or release from, new selective pressures (e.g., novel competitors, predators, or parasites) may also contribute to shape invasive species behavioral shifts in the latter stages of the invasion process (i.e., enemy release hypothesis)23,24.
However, unlike post-establishment spatial sorting, selective filtering of behavioral traits may also occur during the initial stages of invasion (i.e., transport, establishment), resulting in substantial effects on the behavior of founder populations that persist long after establishment, even in the absence of any dispersal within the invasive range14. The impacts of selective filtering may be further exacerbated where there is a significant separation between the native and invasive ranges, leading to minimal gene flow between the two populations25. Thus, both initial selective filtering and post-establishment responses to invasive range environmental pressures represent distinct processes playing an important role in the evolution of behavioral phenotypes in invasive populations. Yet, despite the likely role of selective filtering in driving behavioral divergence during invasions, to our knowledge convincing evidence of a selective filter acting on behavioral traits during species invasions has not been shown17,18.
Here, we capitalized on a well-characterized biological invader, the delicate skink (also known as the plague skink or rainbow skink; Lampropholis delicata), to investigate the role of selective filtering in driving behavioral variation across the species' global invasive range. The delicate skink is a small lizard species native to eastern Australia26 that has successfully invaded three regions across the Pacific. Importantly, our previous molecular work provides a robust reconstructed invasion history for the species, making it an ideal candidate for investigating selection on behavior during the invasion process27. On the Hawaiian Islands, delicate skinks invaded via human-mediated dispersal from the Brisbane region in approximately 190527. The species was restricted to Oahu until shortly after WWII, when over an ~12-year period (1963–1975), it was detected on the other five main Hawaiian islands27. In New Zealand, delicate skinks colonized in the early 1960s via a shipment of railway sleepers from the Tenterfield region27,28. It was restricted to the Auckland region for ~15 years until it spread rapidly across the North Island, predominantly via human-mediated dispersal followed by secondary natural range expansion (Hamilton: 1978, Whangarei: 2002, Edgecumbe: 2007)27,29. The species is still actively expanding its range across both the North and South Islands of New Zealand30. The most recent invasion of the delicate skink was to Lord Howe Island, where it colonized in the 1980s as a stowaway in cargo and supplies from the Coffs Harbour region27,31,32. Its subsequent spread across the island was driven by additional introductions, and subsequent genetic admixture, from other native range source regions (Brisbane, Sydney, Tenterfield)27,31,32.
The delicate skink is highly adept at human-mediated dispersal. It is the only Australian lizard species that has become invasive overseas, and it has done so on multiple occasions, with subsequent human-assisted spread within each invasive region27,29. For instance, New Zealand biosecurity records indicate that it is one of the most frequent lizard species intercepted arriving as a stowaway from Australia28 and is predominantly spread via human-mediated dispersal within New Zealand29.
We have previously hypothesized that behavior may drive the proficiency of the delicate skink as a stowaway, as it is more exploratory than non-invasive congenerics33,34.
To test the selective filter hypothesis, we capitalized on the invasion of the delicate skink across the Pacific. Previous research has found that invasive species are often more exploratory, active, and bolder than their native range counterparts18,35,36,37,38. Here, it is suggested that these traits may make species more likely to find their way onto transport vectors in the first place, and/or more likely to disperse to locate suitable habitat and resources once introduced into their new range14. Therefore, it is expected that invasions may act as a ‘selective filter’ on these traits, resulting in invasive populations being more exploratory, active, and bolder than native range conspecifics. However, prior studies have mainly investigated behavioral changes in a single invasive population, making it difficult to disentangle selective filtering during invasion from initial founder effects. We, therefore, systematically and repeatedly measured exploratory behavior, activity, and boldness in 520 lizards from 14 populations, across three independent invasive lineages (Hawaii, New Zealand, Lord Howe Island), as well as their Australian native range source regions (Sydney, Coffs Harbour, Tenterfield, and Brisbane; Fig. 1; Supplementary Table 1). Within this framework, while the occurrence of similar founder effects resulting from multiple independent invasions is unlikely, consistency in patterns across independent invasive lineages is expected under the selective filtering hypothesis. Thus, by investigating the occurrence of a shared behavioral pattern across the distinct skink lineages within the three invasive ranges, we aim to shed light on the key signatures of selection during the invasion process.
Fig. 1: Schematic diagrams of both native (Australia) and invasive (Hawaii, Lord Howe Island, and New Zealand) delicate skink populations collected in this study. Result plots show average regional differences in exploratory behavior (i.e., time spent exploring the barrier), activity (i.e., the number of transitions between grid squares), and boldness (i.e., re-emergence latency; axis inverted) between Australia (grey; n = 167 skinks), Hawaii (blue; n = 118), Lord Howe Island (green; n = 92), and New Zealand (pink; n = 143). Boldness scores were inverted so that higher values represent bolder lizards. All behavioral scores are presented in standardized units. For each results graph, filled circles represent posterior medians, vertical error bars denote 95% credible intervals, and plot width represents probability density. Source data are provided as a Source Data file.
We first used a broad, regional-level analysis to determine how behavior has changed after introduction into Hawaii, New Zealand, and Lord Howe Island. We predicted that invasion would have driven a shift in behavioral trait means, with introduced populations being more exploratory, active, and bolder than lizards from the native range. Further, if invasion acts as a selective filter on these traits where only the most exploratory, active, and boldest individuals are introduced, we expected that lizards in invasive populations would be more behaviorally similar to each other than their native range counterparts.
That is, we predicted that there would be a reduction in among-individual variation in all three invasive lineages when compared to native range populations. In addition to the predictions of the selective filter hypothesis, previous research has suggested that behavioral plasticity may be one way in which animals cope with environmental change39. To this end, we expected that there would be an increase in behavioral plasticity in invasive populations. In line with previous research, we quantified behavioral plasticity as within-individual behavioral variation (i.e., a form of reversible behavioral plasticity) that can be due to either behavioral changes in response to variation in environmental conditions, or within-individual behavioral variation even in the same conditions (i.e., low behavioral rigidity)40,41,42. The former can be a valuable immediate response to novel conditions, and the latter can allow further adjustments to repeated exposures to the same novel conditions. Both should be valuable for individuals invading new habitats. Here, we tested whether behavioral plasticity differs between native and invasive populations. In conjunction with the broad-scale comparisons across the Pacific (i.e., regional-level analysis), the predictions of the selective filter hypothesis are also examined at a finer spatial scale by exploiting the well-characterized range expansion of invasive delicate skinks within New Zealand (i.e., within-region analysis). We expected that the most recently established New Zealand populations would show an increase in the means of the focal behavioral traits, a reduction in among-individual variation, and an increase in behavioral plasticity relative to longer-established conspecifics.
Results and discussion
The results of the present study are in agreement with the predictions of the selective filter hypothesis, suggesting that this mechanism might have played an important role in driving the behavioral shifts observed in the invasive populations of the delicate skink (Fig. 1; Supplementary Tables 6–8). In the broad, regional-level analysis, we found a substantial increase in exploratory behavior after the introduction of skinks into all three invasive regions (i.e., Hawaii, New Zealand, Lord Howe Island), relative to native Australian populations (Fig. 1; Supplementary Table 6). Similarly, invasive lizards from New Zealand were substantially bolder than their native range counterparts, and were slightly more active (even if credible intervals marginally overlapped zero) than source Australian populations (Fig. 1; Supplementary Tables 7 and 8). However, this was not the case for invasive skinks from Hawaii, which showed no substantial difference in either their activity or boldness when compared to native Australian populations (Fig. 1; Supplementary Tables 7 and 8). Moreover, while Lord Howe Island skinks were slightly less bold than native range populations (credible intervals marginally overlapped zero), there was little difference between the two regions in their activity levels (Fig. 1; Supplementary Tables 7 and 8). Interestingly, we found that skinks from all three invasive regions lacked the well-characterized activity-exploration behavioral syndrome previously found in native Australian populations43.
More specifically, while Australian populations demonstrated an activity–exploration behavioral syndrome (r [95% CI] = 0.414 [0.215, 0.607]), this was not found in skinks from invasive Hawaiian (0.275 [–0.091, 0.642]), New Zealand (0.250 [–0.177, 0.644]), or Lord Howe Island (0.078 [–0.400, 0.542]) ranges, suggesting that invasion has disrupted this behavioral syndrome. There was no evidence of correlations between any other behavioral traits from any of the native or invasive regions.
Using variance partitioning, we then investigated whether invasion resulted in lower among-individual behavioral variance within invasive populations—a key prediction of the selective filter hypothesis (Table 1; Supplementary Tables 9–11). We found that skinks from all three independent invasive regions (i.e., Hawaii, New Zealand, Lord Howe Island) exhibited a substantial decrease in among-individual variation in their exploratory behavior relative to native Australian populations (Table 1; Fig. 2). Similarly, invasive New Zealand skinks demonstrated lower among-individual variation in their activity than their native range counterparts (Table 1; Fig. 2). However, there were no clear differences in among-individual variation in activity between native Australian lizards and introduced populations from either Hawaii or Lord Howe Island (Table 1; Fig. 2). Further, there were no apparent differences in among-individual variation in boldness between Australian populations and skinks from any of the three invasive regions (i.e., Hawaii, New Zealand, Lord Howe Island; Table 1; Fig. 2).
Table 1 The effect size (±95% CI) of the magnitude difference in among-individual variation (ΔVA), within-individual variation (ΔVW), and repeatability (ΔR) of exploration, activity, and boldness of lizards from Australia (AUS; n = 167 skinks), Hawaii (HAW; n = 118), Lord Howe Island (LHI; n = 92), and New Zealand (NZ; n = 143).
Fig. 2: Adjusted, short-term repeatability and variance estimates (among- and within-individual) for exploratory behavior (i.e., time spent exploring the barrier), activity (i.e., number of transitions between grid squares), and boldness (i.e., re-emergence latency) of lizards from native Australian populations (grey; n = 167 skinks), as well as skinks from their invasive range in Hawaii (blue; n = 118), Lord Howe Island (green; n = 92), and New Zealand (pink; n = 143). For each graph, filled circles represent the median variance/repeatability estimates extracted from linear mixed-effects models, vertical error bars denote 95% credible intervals, and plot width represents probability density. Source data are provided as a Source Data file.
Taken together, these results suggest that invasion may favor more risk-prone (i.e., exploratory, active, and bold) behavioral types by promoting shifts in behavioral trait means, and changes in behavioral variation. This was especially true for exploratory behavior, where skinks from all three independent invasive ranges (i.e., Hawaii, New Zealand, and Lord Howe Island) were more exploratory and demonstrated substantial reductions in among-individual variation compared to native range conspecifics. We contend that these findings are consistent with the selective filter hypothesis and not merely due to founder effects during the initial introduction. Specifically, under a founder effects framework, we would have expected the directionality of trait differences between native and invasive populations to be random.
Instead, we found consistent patterns across all three independent invasive lineages (at least for exploratory behavior). Similarly, the decrease in among-individual variation in exploratory behavior was not found for all behavioral traits in all invasive populations, again suggesting that reduced among-individual variance may not necessarily be due to founder effects during the initial invasion. Moreover, the consistent patterns in exploratory behavior were even found in the Lord Howe Island invasive populations, where repeated introductions from multiple sources and subsequent genetic admixture are expected to have reduced any local adaptation and/or runaway selection for invasion-promoting behavioral traits (i.e., Olympic Village effect)19,20. Collectively, these results suggest that invasion itself may act as a selective filter promoting risk-prone behavioral types.
However, at which stage of the invasion this selection operates is unknown. For example, this selective filter may occur during the uptake and transportation stage, where only the most exploratory individuals find their way onto transport vectors (i.e., shipping, air, etc.) in the first place14. Indeed, previous research has suggested that the increased exploratory tendencies of delicate skinks may make them more likely to become ensnared in transport vectors and moved to regions beyond their native range33. These findings, together with biosecurity records showing that most lizards are transported as single individuals28, suggest that pre-establishment selection for increased exploratory behavior may occur during the initial stages of invasion. Whether such traits are still adaptive long after establishment, when populations eventually become subject to predator-induced and density-dependent natural selection, is not clear44,45. Future research measuring the behavior of lizards intercepted during initial transit and after establishment will be needed to investigate at which stage the selective filtering of behavioral types may occur, and how selection may act on these behavioral types once populations become well established.
We found similar results from the fine-scale, in-depth analysis of population differences within the invasive New Zealand range (Supplementary Tables 15–17). Specifically, all invasive New Zealand populations were more active than their native range source, with the most recently established populations (e.g., Whangarei, Edgecumbe) towards the edge of their range showing the greatest difference (Fig. 3; Supplementary Table 16). These results suggest that the sequential selective filtering of populations during human-mediated dispersal within the invasive New Zealand range29 may select for increasingly active skinks in the most recently established populations (but see the subsequent discussion on the potential role of spatial sorting). Furthermore, most invasive New Zealand populations were bolder and more exploratory than their native range source (Fig. 3; Supplementary Tables 15 and 17). However, unlike for activity, there was no clear evidence that this increased in the more recently established populations. Additionally, there was a general trend towards lower among-individual variation in activity in more recently established populations (Supplementary Fig. 1; Supplementary Table 19). In contrast, we found no clear differences when comparing each invasive New Zealand population to the Australian source in among-individual variation in either exploratory behavior or boldness (Supplementary Fig. 1; Supplementary Tables 18 and 20).
Fig. 3: Differences in average-level exploratory behavior (i.e., time spent exploring the barrier), activity (i.e., number of transitions between grid squares), and boldness (i.e., re-emergence latency; axis inverted) between the native Tenterfield source population (grey; n = 30 skinks) and invasive New Zealand skinks from Auckland (red; n = 31), Hamilton (orange; n = 43), Whangarei (light-orange; n = 33), and Edgecumbe (yellow; n = 36). Boldness scores were inverted so that higher values represent bolder lizards. All behavioral scores are presented in standardized units. For each graph, filled circles represent posterior medians, vertical error bars denote 95% credible intervals, and plot width represents probability density. Source data are provided as a Source Data file.
Although our findings are concordant with predictions of the selective filter hypothesis, it is important to note that post-establishment processes may also contribute to an increase in risk-prone behavioral phenotypes in invasive populations. For instance, post-establishment spread dynamics could also facilitate behavioral change: previous research investigating the spread of introduced cane toads (Rhinella marina) across northern Australia found that spatial sorting of phenotypes facilitates the evolution of increasingly exploratory, active, and bold individuals at the range edge of the invasion21,46,47. Indeed, spatial sorting and subsequent interbreeding of risk-prone individuals (i.e., Olympic Village effect) may also be responsible for the current results, whereby the most recently established New Zealand populations towards the edge of their range were the most active.
However, we think it unlikely that spatial sorting is largely responsible for the observed shift in the behavior of delicate skinks across their invasive range. Firstly, within the New Zealand invasion, the increase in risk-prone behavioral types (at least for exploratory behavior and activity) was seen even in the initial founding Auckland population (established ~1960s), suggesting a potential role for pre-establishment selective filtering in the observed behavioral shift. Secondly, our previous research has shown that invasive delicate skinks in New Zealand often spread via human-mediated jump dispersal between disjunct locations, rather than solely via natural range expansion27,29. This is similar to Hawaii, where delicate skinks have spread between islands via human-mediated dispersal27. Finally, delicate skinks now have a nearly continuous distribution across Lord Howe Island27. However, despite the island’s small size (~11 km long, ~2 km wide; Fig. 1), there is a clear spatial structuring of haplotypes from different native source regions across the island27. This suggests that the colonization of Lord Howe Island was driven by multiple, separate introductions from various Australian source populations, rather than via natural range expansion from an initial founder population27. Together, these results highlight that spatial sorting during natural range expansion is unlikely to be largely responsible for the observed behavioral shifts in invasive delicate skinks.
Similarly, reduced predation pressure and/or competition in the invasive range may also promote risk-prone behavioral phenotypes (i.e., enemy release hypothesis)23.
Indeed, enemy release within the invasive range may reduce the fitness costs of risky behavioral strategies, resulting in introduced populations being more risk-prone than their native counterparts. While we cannot rule out the possibility that enemy release may have facilitated behavioral shifts in the invasive populations, we again believe that this process is unlikely to be solely responsible for the current pattern of results. For example, Hawaii, Lord Howe Island, and New Zealand all have abundant avifauna that likely predate upon invasive delicate skinks. Indeed, there are substantial populations of invasive Australian magpies (Gymnorhina tibicen) in New Zealand48, which are known to predate on small skinks in their native range49. Further, during fieldwork on Lord Howe Island, we often observed invasive delicate skinks being predated upon by native Lord Howe Island currawongs (Strepera graculina crissalis; A.C. Naimo, pers. obs.), suggesting that delicate skinks are a likely target for predation in their invasive range. Thus, while both spatial sorting and enemy release likely play a part in driving the current pattern of results (and are potentially responsible for observed behavioral differences between the separate invasive regions, where the intensity of these processes may differ), we contend that it is unlikely that these processes are predominantly responsible for the consistent behavioral shifts found across multiple, independent invasive lineages of the delicate skink. Together, the present results suggest that pre-establishment selective filtering for risk-prone behavioral types is a likely mechanism driving the behavioral divergence of invasive delicate skinks. Understanding the probable interplay between both pre- and post-establishment processes in facilitating behavioral change in introduced populations will be an important topic for future research.
We also found evidence for increased within-individual variation (i.e., behavioral plasticity) in the introduced populations (Fig. 2; Table 1; Supplementary Tables 9–11). Indeed, lizards from all three invasive ranges (i.e., Hawaii, New Zealand, and Lord Howe Island) were more behaviorally plastic in their exploratory behavior, activity, and boldness than native Australian populations (Table 1; Fig. 2). This increased within-individual behavioral variation in the introduced populations resulted in a decrease in behavioral repeatability in the invasive range compared to native populations—a finding that was generally consistent across all three invasion pathways (Table 1; Fig. 2). This pattern was also found when investigating fine-scale population differences in behavioral plasticity within the New Zealand range (Supplementary Tables 18–21). Indeed, even though differences between New Zealand populations were less marked and there was substantial uncertainty around the estimates, we found that New Zealand populations were generally more behaviorally plastic in their boldness and exploratory behavior than their native range source region, and this difference (at least for exploratory behavior) seemed to be greatest in more recently established populations (Supplementary Tables 18 and 20–21; Supplementary Fig. 1). Similarly, there was also a trend towards increased within-individual variation in activity in invasive New Zealand populations compared to their native source (Supplementary Tables 19 and 21).
Again, the observed increase in within-individual variance of skinks from New Zealand was accompanied by a general reduction in the repeatability of exploratory behavior, boldness, and activity compared to their native Australian source population (Supplementary Tables 18–21; Supplementary Fig. 1).
These findings clearly demonstrate that invasion promotes increased within-individual behavioral variation (i.e., behavioral plasticity). While the magnitude of the effect varied amongst populations, the pattern was true for all behavioral traits in all three independent invasion lineages. This suggests that increased behavioral plasticity may be one way in which organisms cope with changing environmental conditions during biological invasions39,50, resulting in invasive populations being, in general, more phenotypically plastic than their native range counterparts. Indeed, behavioral plasticity may buffer individuals against novel selection pressures experienced within the invasive range by allowing them to rapidly adjust their behavior to current environmental conditions51. This flexibility in behavior may promote population stability and persistence, particularly during the early stages of invasion, where invaders are characterized by small population sizes that are susceptible to environmental and demographic stochasticity51,52. Whether the increased behavioral plasticity seen in invasive populations is an evolved adaptive response or is due to evolutionarily novel conditions experienced by invaders during development is not clear and will require further research.
Collectively, these results emphasize the importance of behavior in invasion biology and suggest that biological invasions may favor increasingly risk-prone individuals that are particularly adept at altering their behavior in response to environmental change. This may pose a particular threat to invaded communities. Indeed, previous research has found associations between risk-prone behavioral types and competitive ability53,54, with potential implications for predator-prey interactions55,56, as well as community structure and trophic cascades57. Thus, selection for increasingly exploratory, active, and bold invaders may have an outsized effect on local native species. Given obvious consequences for invasion dynamics, our findings also underscore the importance of monitoring sensitive trade routes for potential stowaways if more risk-taking individuals have a higher propensity to become accidentally transported beyond their native range. More generally, as invasive species are a leading driver of global biodiversity loss6,58, understanding how invasion selects for specific behavioral phenotypes in invading populations may allow us to better predict the effects of alien species on invaded communities.
Methods
The research was conducted in accordance with all relevant ethical approvals (University of California, Davis Animal Ethics Committee Protocol No. 211194, Monash University School of Biological Sciences Animal Ethics Committee Protocol No. 16736, Massey University Animal Ethics Committee Protocol No. MUAEC17/76) and collection permits (Hawaii: K2019-4044cc and EX-19-18, Lord Howe Island: LHIB 09/18, Australia: SL102160 [NSW] and 10008946 [VIC]).
Study sites and animal husbandry
We collected 520 delicate skinks (Lampropholis delicata) from 14 populations across the species’ native (mainland Australia) and invasive (New Zealand, Lord Howe Island and Hawaii) range between November 2015 and August 2019 (see Supplementary Table 1).
More specifically, we collected skinks from four sites across their native Australian range (n = 167; range = 27–81 lizards from each site), four sites across their invasive range in New Zealand (n = 143; range = 31–43 lizards from each site), and three sites each within the skinks’ introduced range on both Lord Howe Island (n = 92; range = 26–36 lizards from each site) and Hawaii (n = 118; range = 39–42 lizards from each site). Lizards were sourced from populations in both urban and non-urban sites within each native and invasive region. However, our previous research has found no general effect of urbanization on the exploratory behavior, activity, and boldness of delicate skinks59. Indeed, consistent patterns of behavior were found in the current study across multiple independent invasive lineages, all with varying degrees of urbanization, suggesting that urbanization likely played a minor role, if any, in explaining skink behavioral responses. All skinks were caught using hand capture, mealworm fishing, and passive trapping. These trapping methods have previously been shown not to introduce any sampling bias towards exploratory, active, or bold skinks60. Lizards were measured for snout-vent length (SVL) using digital calipers and marked with unique permanent identification codes using Visual Implant Elastomer (Northwest Marine Technology, Shaw Island, WA, U.S.A.). We collected adult male skinks with complete tails (i.e., SVL > tail length) to avoid the well-documented effects of tail loss34,61 and gravidity62 on Lampropholis behavior. Skinks were transported in small groups within temporary housing containers via a combination of car and air travel back to animal facilities at either the Center for Aquatic Biology and Aquaculture, University of California, Davis (Hawaiian populations), Massey University, Auckland (New Zealand populations), or Monash University, Melbourne (Australian and Lord Howe Island populations). Transport times from the field site
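A note on the variance partitioning used in the results above, stated here only in its generic form from the animal-personality literature rather than as the paper's exact model specification: repeatability for each behavioral trait is the fraction of total phenotypic variance attributable to differences among individuals,
R = \sigma^2_{\mathrm{among}} / \left( \sigma^2_{\mathrm{among}} + \sigma^2_{\mathrm{within}} \right).
Under this definition, the two shifts reported for the invasive ranges (reduced among-individual variance in exploration, and increased within-individual variance, i.e., plasticity) both act to lower R, which is why behavioral repeatability drops in the invasive populations relative to the native Australian source.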
A study that followed thousands of people over 25 years has identified proteins linked to the development of dementia if their levels are unbalanced during middle age. From a report: The findings, published in Science Translational Medicine on 19 July, could contribute to the development of new diagnostic tests, or even treatments, for dementia-causing diseases. Most of the proteins have functions unrelated to the brain. "We're seeing so much involvement of the peripheral biology decades before the typical onset of dementia," says study author Keenan Walker, a neuroscientist at the US National Institute on Aging in Bethesda, Maryland. Equipped with blood samples from more than 10,000 participants, Walker and his colleagues questioned whether they could find predictors of dementia years before its onset by looking at a person's proteome -- the collection of all the proteins expressed throughout the body. They searched for any signs of dysregulation -- when proteins are at levels much higher or lower than normal.
The samples were collected as part of an ongoing study that began in 1987. Participants returned for examination six times over three decades, and during this time, around 1 in 5 of them developed dementia. The researchers found 32 proteins that, if dysregulated in people aged 45 to 60, were strongly associated with an elevated chance of developing dementia in later life. It is unclear how exactly these proteins might be involved in the disease, but the link is "highly unlikely to be due to just chance alone," says Walker.
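To make the screening logic described above concrete, the sketch below shows one simple way such a proteome-wide scan could be run: fit one regression per protein against later dementia status, then correct for multiple testing. The column names, the choice of logistic regression, and the age-only adjustment are illustrative assumptions for this example; they are not the published study's actual pipeline or covariate set.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def screen_proteins(df, protein_cols):
    """Fit one logistic regression per protein (dementia ~ protein + age) and flag FDR hits."""
    rows = []
    for protein in protein_cols:
        exog = sm.add_constant(df[[protein, "age_at_sampling"]])    # assumed column name
        fit = sm.Logit(df["developed_dementia"], exog).fit(disp=0)  # assumed 0/1 outcome column
        rows.append({
            "protein": protein,
            "odds_ratio": float(np.exp(fit.params[protein])),
            "pvalue": float(fit.pvalues[protein]),
        })
    results = pd.DataFrame(rows)
    # Benjamini-Hochberg correction across every protein tested
    results["significant"] = multipletests(results["pvalue"], alpha=0.05, method="fdr_bh")[0]
    return results.sort_values("pvalue")

In a long-running cohort like this one, the real analysis would typically use time-to-event models and a richer set of covariates, but the per-protein scan plus multiple-testing correction is the core of how a "dysregulation" signal is separated from chance.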
Inside a science park lab next to the University of York, two clusters of robots are busy moving clear plates with mechanical arms as they screen many millions of molecules. The machines need only 24 hours to complete work that would usually take teams of human scientists several days. The lab is run by Aptamer Group, a small biotech firm that has quietly carved out a leading position in the development of a highly sought-after technology. Its scientists create aptamers – fragments of DNA, also known as synthetic antibodies, that are used to diagnose illnesses, or to deliver drugs to their target to fight a range of diseases including cancer.
“An aptamer is a short synthetic piece of DNA or RNA that folds into three-dimensional shapes and sticks to targets of interest,” Dr David Bunka, the company’s chief technical officer, explains on our lab tour. The word comes from the Latin “aptus”, to fit.
As the aptamers, or nucleic acid molecules, pass through a suite of hi-tech labs, they are tested according to how well they bind to proteins or other cellular targets. The best binders are then trimmed and checked by the quality control team before the final product is made on a larger scale, purified, and put into tubes for packing and shipping.
The business counts three-quarters of the world’s top 20 pharmaceutical companies among its clients, including Japan’s largest drugmaker, Takeda. It is working with the UK’s top drugs firm, AstraZeneca, on kidney disease treatments; with Cancer Research UK to develop aptamers as targeted treatments for chronic myelomonocytic leukaemia, a rare type of blood cancer; with South Korea’s PinotBio to develop precision chemotherapy treatments; and with Gene Therapeutics in the US to create gene therapies.
Aptamer Group’s £81m flotation on London’s junior Aim market last year turned its two founders into paper millionaires with a combined fortune of £33m. The company is rapidly expanding. It has just moved into a new headquarters in the York science park that has doubled its lab space – but is struggling to recruit more scientists, in part because of a lack of affordable housing in York, and the difficulty of hiring Europeans following Brexit.
The group has developed a test with the environmental technology group Deepverge that can detect Covid-19 in wastewater, and is now pursuing tests for other contaminants. It dropped a partnership with Mologic to develop a rapid lateral flow Covid test last year because it realised the market was “saturated”.
Dr Arron Tolley, the chief executive of Aptamer Group, which has grown to employ about 60 people. Photograph: Christopher Thomond/The Guardian
The business was founded in 2008 by Bunka, a molecular biologist, and Dr Arron Tolley, an early school leaver with ADHD, who became a bricklayer and later completed a doctorate in biophysics and molecular biology. They met at Leeds University and formed a “bromance”, says Tolley.
Aptamers, like antibodies, bind specifically to a target molecule and can be used to deliver drugs to tumour cells as precision chemotherapy, for example, or to identify cancer cells in samples for diagnostic purposes, with no binding to healthy cells.
However, antibodies have to be generated inside living beings such as rabbits, mice, goats or sheep, by injecting an animal with a target of interest such as a virus, which will trigger an immune response.
By contrast, aptamers are produced using cutting-edge synthetic DNA technology. Bunka says: “The key principles of aptamer selection are taking a library of aptamers, incubating them with the target molecules, separating the aptamers that stick from those that don’t, and amplifying the ones that stick, creating a new generation of binders.”
Aptamers are also more stable than antibodies and have a longer shelf life; they can be kept in the fridge while antibodies require freezer storage.
Nick Turner, professor in bioanalytical chemistry at De Montfort University in Leicester, who works with aptamers, says: “They are faster to produce than antibodies, but one of the key things is the ethical advantages: because they are synthesised artificially, you haven’t got to go through the process of using animal models.
The synthesis lab at Aptamer. Photograph: Christopher Thomond/The Guardian
“They are environmentally stable, more robust, and so much easier to handle than antibodies. They are also relatively easy to label. You can add fluorescent tags to them so they are easy to detect.”
Aptamers are also far more effective. Antibodies fail 50% of the time, research from several journals including Nature has shown, while Aptamer Group’s synthetic antibodies have a 70% success rate, according to internal data. The firm is able to develop aptamers within 15 days if needed, although it typically takes 10 to 12 weeks, while antibodies take anywhere from four to 18 months to generate.
The global aptamer market was worth $2.4bn (£2.0bn) last year and is forecast to rake in annual revenues of $11.5bn by 2030, according to the market research firm Fact.MR.
“Nucleic acid technology as a whole is a massively growing area, and aptamers are one of the key players in that field,” Turner says. “The UK has always been a strong science leader. In the area of molecular recognition including aptamers … we are one of the world leaders.”
Aptamer Group – which was set up in the basement of Tolley’s former home in Leeds – employs almost 60 people today, including 35 scientists. Its new, three-storey building can accommodate up to 100 people but the company is struggling to find staff – in particular, chemists, quality control scientists and project managers.
“The biggest challenge we’ve got is recruitment,” says Alastair Fleming, the chief operations officer. “Hiring is difficult, even bringing in basic scientists. Brexit has affected us as well. We normally expect more Europeans.” Britain’s departure from the EU has created more paperwork, and waiting times of about six months for a work visa.
Fleming says it’s hard “to get the right-calibre graduate in”, with a shortage of affordable housing in York, where property prices have risen 9% in the past year while rents surged 21%. The average asking price in York has reached £368,878 while the average asking rent has risen to £1,378 a month, according to the property website Rightmove. Many of Aptamer Group’s scientists come from local universities – York, Leeds, Huddersfield or Edinburgh.
Described by Tolley as a “true bootstrap company”, it managed to grow on £5m of funding until its stock market debut last year, including from a local angel investor, some government money and £14,000 each from Tolley’s and Bunka’s parents.
For a company of its size, London’s junior market Aim was the only option, but Tolley does not rule out shifting to Nasdaq in the US one day. “You see a lot of companies go to the US because there are more funding opportunities.
It’s not a cheap industry to be in,” he says.
A House of Commons science and technology committee report in 2013 talked of the “valley of death” that prevents the progress of science from the lab bench to a commercially successful product, and despite Rishi Sunak’s declared ambition to turn the UK into a “science and technology superpower”, this lack of venture funding for companies as they grow is still a big problem, Tolley says.
The shares have fallen about 50% since the IPO, similar to other biotech stocks, “impacted by the macro backdrop and poor sentiment for growth”, says Liberum analyst Edward Thomason. “However, Aptamer continues to deliver against its IPO mandate to scale the business and grow its business development pipeline.”
Tolley, Bunka and the rest of the management team have been locked in to their shareholdings for a year, and Tolley says they have no intention of selling them.
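Bunka's description of aptamer selection earlier in the piece (incubate a library with the target, keep what sticks, amplify, repeat) is essentially an iterative enrichment loop. Below is a minimal toy simulation of that loop; the binding probabilities, round count and pool sizes are invented purely for illustration and bear no relation to Aptamer Group's actual screening parameters.

import random

def selection_round(pool, bind_prob, amplification=4):
    """Keep sequences that bind the target this round, then amplify the survivors."""
    bound = [seq for seq in pool if random.random() < bind_prob[seq]]
    return bound * amplification  # crude stand-in for PCR amplification

def simulate_selection(n_rounds=8, library_size=10_000):
    # Each "sequence" is just an ID with a fixed, randomly assigned binding probability;
    # cubing a uniform draw makes strong binders rare, as in a naive starting library.
    bind_prob = {i: random.random() ** 3 for i in range(library_size)}
    pool = list(bind_prob)
    for _ in range(n_rounds):
        pool = selection_round(pool, bind_prob)
        if not pool:
            break
        if len(pool) > library_size:  # keep the toy pool a manageable size
            pool = random.sample(pool, library_size)
    # After a few rounds the pool is dominated by the strongest binders.
    top = sorted(set(pool), key=bind_prob.get, reverse=True)[:5]
    return [(seq, round(bind_prob[seq], 3)) for seq in top]

if __name__ == "__main__":
    print(simulate_selection())

Real selections typically work on actual oligonucleotide sequences, add counter-selection against off-targets, and track enrichment by sequencing, but the keep-and-amplify structure of the loop is the same.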
Continuous cough monitoring: a novel digital biomarker for TB diagnosis and treatment response monitoring
No Abstract
Document Type: Letter to the Editor
Affiliations: 1: UCSF Center for Tuberculosis, University of California San Francisco, San Francisco, CA, USA, Division of Pulmonary and Critical Care Medicine, San Francisco General Hospital, University of California San Francisco, San Francisco, CA, USA 2: Infectious Diseases Research Collaboration, Kampala, Uganda 3: De La Salle Medical and Health Sciences Institute, Center for Tuberculosis Research, City of Dasmariñas, Cavite, The Philippines 4: Hanoi Lung Hospital, Hanoi, Vietnam 5: Department of Pulmonary Medicine, Christian Medical College, Vellore, India 6: DSI-NRF Centre of Excellence for Biomedical Tuberculosis Research, and SAMRC Centre for Tuberculosis Research, Division of Molecular Biology and Human Genetics, Faculty of Medicine and Health Sciences, Stellenbosch University, Tygerberg, Cape Town, South Africa 7: Department of Medicine, Makerere University College of Health Sciences, Kampala, Uganda 8: Vietnam National Tuberculosis Control Program, Hanoi, Vietnam 9: DSI-NRF Centre of Excellence for Biomedical Tuberculosis Research, and SAMRC Centre for Tuberculosis Research, Division of Molecular Biology and Human Genetics, Faculty of Medicine and Health Sciences, Stellenbosch University, Tygerberg, Cape Town, South Africa 10: Division of Infectious Diseases and Tropical Medicine, Center of Infectious Diseases, Heidelberg University, Heidelberg, Germany, German Center for Infection Research (DZIF), Heidelberg University Hospital Partner Site, Heidelberg, Germany
Publication date: 01 March 2023
Laser-controlled intracellular flows in temperature-sensitive biological cells
Micromanipulation techniques are widely adopted in materials science, colloidal physics and life sciences for various applications, ranging from nanostructure assembly and particle trapping to spatio-temporal analysis of cell organization. Optically induced thermoviscous flows, known as focused-light-induced cytoplasmic streaming (FLUCS), can manipulate the cytoplasm in cells and developing embryos.
Thermoviscous flows arise from the complex interplay of thermal expansion and temperature-induced viscosity changes when repeatedly moving a heating point stimulus through a thin fluid film. Specifically, the localized heating generated by scanning the IR-laser spot through the sample induces a small local change in the density and viscosity of the fluid, resulting in a locally compressible fluid flow with a net transport of tracers.
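As a rough illustration of why both ingredients matter (this is a simplified, order-of-magnitude scaling in the spirit of the thermoviscous-pumping literature, not a result taken from the eLight paper, and it omits geometry-dependent prefactors): the net tracer displacement per scan can be thought of as the product of the expansion-driven and viscosity-driven modulations,
\Delta x_{\mathrm{net}} \sim \alpha \,\Delta T \cdot \beta \,\Delta T \cdot \ell ,
where \alpha is the thermal expansion coefficient, \beta the relative viscosity change per kelvin, \Delta T the peak temperature excursion of the scanned spot, and \ell the scan length. If either \alpha or \beta were zero, the flows during the approach and retreat of the hot spot would cancel over a full cycle and no net transport would remain; some heating is therefore intrinsic to the method, which is exactly the side effect the work described below seeks to minimize and homogenize.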
Although FLUCS has the advantage of generating directional flows with highly reduced invasiveness, samples still experience some temperature modulation. This could affect highly heat-sensitive systems, such as thermosensitive mammalian cells.
In a new paper published in eLight, a team of scientists led by Professor Moritz Kreysing from Karlsruhe Institute of Technology developed nearly isothermal scan sequences that could open new avenues in soft matter research and biomedicine.
The research team showed that a step-by-step optical strategy can disentangle laser-induced flows from heating. They did so by using previously disregarded degrees of freedom that accompany the optical generation of flow fields. The temperature distribution is homogenized over the region of interest (ROI) by introducing additional counter-directed paths, symmetrically arranged around the desired trajectory.
In particular, the team exploited symmetry relations on up to three distinct time scales. This led to locally homogeneous temperature distributions while inducing directional flows whose flow lines are largely unaltered compared to standard FLUCS. Additionally, the sample is cooled to the required temperature with an external Peltier cooling system.
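To make the symmetrization idea concrete, here is a minimal sketch of how a flow-generating path could be interleaved with counter-directed flanking paths so that the time-averaged heat dose is roughly symmetric about the desired trajectory. The geometry, offsets and repeat counts are illustrative assumptions only; the actual ISO-FLUCS scan tables and their three-timescale ordering are specified in the paper.

def build_symmetrized_sequence(path, offset, n_repeats=10):
    """Interleave the flow-generating stroke with two counter-directed flanking strokes.

    `path` is a list of (x, y) points visited in the flow direction; the flanking
    strokes are offset copies traversed in the opposite direction, so heating is
    deposited more evenly around the trajectory while the central stroke still
    sets the net flow direction. Illustrative only, not the published scan tables.
    """
    reverse = list(reversed(path))
    flank_up = [(x, y + offset) for (x, y) in reverse]
    flank_down = [(x, y - offset) for (x, y) in reverse]
    sequence = []
    for _ in range(n_repeats):
        sequence.extend(path)        # flow-generating stroke along the trajectory
        sequence.extend(flank_up)    # counter-directed stroke above it
        sequence.extend(flank_down)  # counter-directed stroke below it
    return sequence

# Example: a straight 20-point line scanned left to right, flanked 2 um above and below.
line = [(0.5 * i, 0.0) for i in range(20)]
scan = build_symmetrized_sequence(line, offset=2.0)

In the published method, the relative weighting of these paths, neutral scan lines, and the ordering over the distinct time scales are what produce a quasi-isothermal profile; the sketch only shows the interleaving structure.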
The researchers have demonstrated that this technology, called isothermal-FLUCS (ISO-FLUCS), is associated with at least a 10-fold reduction in heating while achieving thermoviscous flows whose magnitudes well exceed endogenous streaming in Caenorhabditis elegans zygotes.
Given its drastically reduced heating impact while retaining FLUCS' main features, the researchers believe that ISO-FLUCS will become the new standard for these optical micromanipulations in highly temperature-sensitive systems in biology and materials science.
ISO-FLUCS can drastically improve the temperature homogeneity inside samples (standard deviation reduced up to 20 times). These results were achieved by adding three new ingredients to the well-established FLUCS approach: (i) flow and heating disentanglement, (ii) multi-timescale scan sequence symmetrization, and (iii) residual high-order field cancelation.
Neutral scan lines were conveniently included in the pattern to flatten the temperature gradient in the region of interest (hundreds of µm²). Spatiotemporally symmetric scan sequences ensured predictable and highly symmetric thermoviscous flow fields. The heating level in conventional FLUCS was previously demonstrated to be tolerable in sensitive biological systems, such as developing C. elegans zygotes.
The clean disentanglement of flows and local heating that ISO-FLUCS offers will not only set a new standard for precise flow manipulation inside living specimens; it also lays the foundations for separately addressing different classes of physical stimuli, such as mechanics and forces versus the temperature dependence of biochemical rates. This will advance flow-driven microrheology by eliminating any temperature-dependent material responses caused by the measurements.
As ISO-FLUCS operates over a very narrow temperature range that can be extremely well controlled, it is also expected to find wide use in studying temperature-sensitive polymeric or particulate hydrogels. The accurate determination of the sol-gel transition is of utmost importance for understanding emerging properties on the macroscale. The fine temperature control in ISO-FLUCS can also be used to investigate the spinodal decomposition of many systems that exhibit a high propensity towards phase separation.
The research team believes that ISO-FLUCS will replace FLUCS as the new standard for such laser-induced optical micromanipulations in the most sensitive samples. Additionally, ISO-FLUCS resonates with a rapidly growing community seeking to harness the power of temperature stimuli to manipulate colloidal and living systems on the micro- and nanoscale. In the future, the team foresees ISO-FLUCS paving the way to medical use cases, such as laser-assisted human reproductive medicine.
More information: Antonio Minopoli et al, ISO-FLUCS: symmetrization of optofluidic manipulations in quasi-isothermal micro-environments, eLight (2023). DOI: 10.1186/s43593-023-00049-z
Provided by Chinese Academy of Sciences | Biology |