Angry Birds maker is Hatch-ing a new subscription streaming game service on Android

The mobile platform puts an emphasis on social gaming.

Rovio’s Angry Birds heyday may long be over, but it’s not out of the game just yet. Starting next year, spin-off company Hatch will launch a new subscription streaming service on Android that looks to change the way we play games on our phones.

Instead of downloading what you want to play, users will select from a variety of games streaming inside the Hatch app. About 100 titles are promised at launch—including Badland, Cut the Rope 2, Leo’s Fortune, and Monument Valley, as well as some Hatch originals—and there will never be any need to update or unlock via in-app purchases. If you’re worried about the performance on your Galaxy S7, the company promises “highly-advanced cloud-based server technology” will keep games running smoothly as you move through levels.

But the Finland-based service isn’t meant just as a time killer—it’s designed to be a true social experience. Since everything is streamed, players can join at any time, and any single-player game can be turned into a multi-player one, where gamers can collaborate and compete, as well as broadcast their sessions. The service will be available in two tiers: free with ads or as a paid subscription with additional benefits. As far as how developers will get paid, Hatch founder and CEO Juhani Hokala simply says, “Leave the monetization to us.” 

Update 4:45pm: The Communications Director from Hatch clarified a couple of things in an email to us. Developers will be paid based on play time—the longer you play a particular game, the more its publisher earns. Bandwidth use is claimed to be “roughly equivalent to streaming hi fidelity music over a service like Spotify.”

The impact on you: Whether it’s killing time on our commute or trying to catch that elusive Pokémon, we all play games on our phones, and the prospect of being able to share that experience with friends near and far is intriguing. But aside from the cost of the service, it remains to be seen what kind of effect this will have on our data plans. Streaming games on the go sounds like it could be a huge gigabyte suck, which would quickly take the fun out of it.

This story, "Angry Birds maker is Hatch-ing a new subscription streaming game service on Android " was originally published by Greenbot.

Here's what your hardware needs for the AWS Greengrass IoT service

Amazon's Greengrass offline IoT service will work on Qualcomm's Snapdragon 410c board and should work with Raspberry Pi 3

Amazon is bringing a bit of its AWS magic to devices and board computers with its Greengrass IoT service, which will help boost offline data collection and analysis.

The goal of Greengrass, an AWS software tool, is to make IoT devices and maker boards smarter. Even underpowered devices collecting data won't be "dumb" anymore, Amazon says.

Amazon has kept in mind that smart devices can't always be connected to the cloud for data analysis, and Greengrass brings some AWS software tools to devices to aid in better collection and analysis of data.

Developer boards are strongly tied to cloud services, which add more functionality to smart devices. Data collected from sensors is typically sent to the cloud, where it can be analyzed and used to determine next steps.

Smarter sorting of data on IoT devices could speed up the analysis, and ensure the right data is sent to the cloud. Sending loads of useless IoT data to cloud services can eat up bandwidth and cost money.

For example, data from sensors in industrial equipment can be analyzed to improve manufacturing or cut down on injuries. In the petroleum industry, sensors can be used to collect data that could effectively nail down the geographic location of oil reserves.

A robot or drone operator could offload basic training models to the devices to help in movement and navigation without being connected to the cloud.

"Code running in the field can collect, filter, and aggregate freshly collected data and then push it up to the cloud for long-term storage and further aggregation," Amazon said in a blog post. "Further, code running in the field can also take action very quickly, even in cases where connectivity to the cloud is temporarily unavailable."

Many developer boards are being used for IoT. Raspberry Pi started off as a hobbyist board but is now tied to cloud services like Microsoft's Azure and IBM's Bluemix. Greengrass will work with a large number of developer boards.

Greengrass has some hardware requirements: a minimum of 128MB of memory and a 1GHz ARM or x86 CPU, running either Amazon Linux or Ubuntu. Qualcomm has said Greengrass will work on its Snapdragon 410c, and the offline IoT service will likely work on the Raspberry Pi 3, which meets the minimum requirements.
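As a rough illustration, a Linux board can be sanity-checked against those minimums with a few lines of Python. This only inspects CPU architecture and installed memory; it is not an official Greengrass compatibility test.

```python
# Rough sanity check of a Linux board against the minimums cited above
# (128MB of memory, an ARM or x86 CPU). It only inspects architecture and
# installed memory; it is not an official Greengrass compatibility test.
import platform

def total_memory_mb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) // 1024   # value is in kB
    raise RuntimeError("MemTotal not found in /proc/meminfo")

arch = platform.machine()          # e.g. 'armv7l', 'aarch64', 'x86_64'
mem_mb = total_memory_mb()
print(f"architecture: {arch}, memory: {mem_mb} MB")
print("meets 128MB minimum:", mem_mb >= 128)
print("ARM or x86:", arch.startswith(("arm", "aarch64", "x86", "i686")))
```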

Other major developer boards like Orange Pi and Pine64 also meet the minimum requirements. Samsung's Artik developer board is tied tightly to Artik Cloud but is not friendly to other cloud services.

Amazon said Greengrass will work on Intel hardware. However, the company could not immediately say if Intel developer boards like Joule would work with Greengrass.

Qualcomm said it believes Greengrass will be used by large-scale manufacturers making devices that rely on AWS cloud.

AWS cloud services are wildly popular, and with Greengrass, more IoT devices could rely on the company's cloud.

Google may be testing out a new card-based layout for the Play Store

A new design is making an appearance for some users, though it's unclear at this point how widespread the rollout is.

Google is apparently testing a new layout for the Play Store. The evidence comes from a video and several posts on Google+ by those who apparently have seen the new design scheme appear.

At first glance the layout isn’t a major change, but as a shared video demonstrates, it lets you scroll through different apps as if they were a series of cards. Putting information into cards is a hallmark of Google’s Material Design, which continues to evolve and find new ways to liven up the company’s software.

The video first appeared in a Google+ post, with other users chiming in to say they’d seen the updates as well.

We haven’t seen the change ourselves, so this could very well be an A/B test or some other type of slow rollout. If it ends up going out more widely or Google makes an official announcement, we’ll be sure to let you know.

Why this matters: Google is constantly innovating on the Play Store’s design and functionality to try to drive more app installs and a smoother experience. The latest change could do both, as the swiping motion is pleasant and will help you evaluate different apps more quickly.

This story, "Google may be testing out a new card-based layout for the Play Store" was originally published by Greenbot.

Enterprises start to migrate critical legacy workloads to the cloud

After gaining cloud experience, they look to make the bigger moves

LAS VEGAS -- Now that major enterprises have gotten their feet wet with smaller cloud projects, they're beginning to focus on migrating large, critical legacy workloads.

That's the take from Stephen Orban, head of enterprise strategy at Amazon Web Services (AWS).

In an interview with Computerworld at the annual AWS re:Invent conference here this week, Orban said the next wave of cloud computing could be focused strategically on legacy migration.

And while it's always tougher -- and riskier -- to move big, mission-critical workloads and services, at least IT departments have gotten experience working with the cloud so they're not going in cold.

"The pace and the deliberate focus on how much they want to migrate has increased substantially across a lot more customers," Orban said. "Capital One has teams dedicated to... migrating existing workloads. We're seeing companies who increasingly have made AWS the new normal, but sometimes they're hamstrung by how much time they have to spend on their legacy systems.... They want to start migrating."

Zeus Kerravala, an analyst with ZK Research, said this point in the development of cloud computing reminds him of virtualization in the late 1990s. "The initial wave of adoption then was about companies trying new things, not mission-critical workloads," he said. "Once organizations trusted the technology, major apps migrated. Today, virtualization is a no-brainer because it's a known technology with well-defined best practices. Cloud computing is going through the same trend today."

It was smart for companies to start out experimenting with the cloud and trying new things with non-mission critical workloads. Now, it's time to move on bigger projects.

"Now that companies are starting to trust the cloud, expect to see faster, broader adoption," said Kerravala. "Eventually, we won't think Does this work in the cloud?' because we know it will."

He noted that early adopters are, naturally, jumping first in terms of moving legacy systems. Once other companies see how that goes, they'll likely follow.

"The problem with legacy workloads is they often need to be re-written," said Kerravala. "We might see some 'lift and shift' happening, where a workload is picked up and put in the cloud, but ultimately that app needs to be rewritten to be cloud native."

Focusing on a migration strategy is a natural progression and an interesting one for many companies that have been confused about how to jump into the cloud.

During the opening keynote earlier today, AWS CEO Andy Jassy said he found many businesses thinking that the cloud was an all-or-nothing proposition. Jassy said AWS has worked to let customers know it's OK to run a hybrid shop with some workloads in the cloud and some on premises.

"Any IT organization that's been running its own operation for some period of time is going to have hybrid as its journey," said Orban. "We're doing everything we can to provide help to them."

One of the biggest challenges of a major migration involves the people more than the tech, according to Orban. IT workers might be hesitant to learn new cloud technology and expand their skills.

That's one of the first issues IT executives need to tackle.

"For every IT professional in the world the cloud is the biggest opportunity for people to learn new skills that will benefit them for a long time," said Orban. "But people are afraid of what they don't know. Anxieties will cause a bit of a delay in how quickly an organization is able to move."

To get started, execs should put a training and certification program in place.

"There's building muscle memory, becoming cloud fluent, the ability to make better faster decisions about migration strategies," said Orban.

This story, "Enterprises start to migrate critical legacy workloads to the cloud" was originally published by Computerworld.

2016 will be 1 second longer: Google can help you cope

The company will let others use its time servers to ride out a "leap second" on Dec. 31

Like a man eager to show off his new watch, Google is encouraging anyone running IT operations to ask it for the time.

The company will let anyone use its NTP (Network Time Protocol) servers, a move to help IT shops cope with the next “leap second,” which will be tacked onto 2016 just after midnight on Dec. 31.

Leap seconds help to keep clocks aligned with Earth’s rotation, which can vary due to geologic and even weather conditions. But an extra second can wreak havoc with applications and services that depend on systems being tightly synchronized.

Most Internet-connected devices get their time through NTP, an open-source technology that's used all over the world. NTP has its own problems, mainly around funding, but it's long been the standard. Google runs its own NTP servers and uses them to ease its systems through leap seconds, according to Michael Shields, technical lead on the company’s Time Team, in a blog post on Wednesday.

Time synchronization is critical for many things Google’s systems do, such as keeping replicas up to date, determining which data-affecting operation happened last, and correctly reporting the order of searches and clicks, the company says.

Ordinary operating systems can’t accommodate a minute that’s 61 seconds long, so some organizations have used special-case workarounds for the extra second. But sometimes these methods raise issues, like what happens to write operations that take place during that second. At times in the past, some Google systems have refused to work when faced with a leap second, though this didn’t affect the company’s services, a Google representative said.

So Google will modify its NTP servers to run clocks 0.0014 percent slower for 10 hours before the leap second and for 10 hours afterward. When the leap second takes place, they will have accounted for it already. Google’s been using this technique, called “smeared time,” since a leap second in 2008.
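The arithmetic behind those numbers is easy to verify: a 0.0014 percent slowdown sustained over the 20-hour window works out to almost exactly one second, as the short Python check below shows.

```python
# Quick check of Google's smear arithmetic: running clocks 0.0014 percent
# slow for the 10 hours before and the 10 hours after the leap second
# absorbs roughly one extra second.
slowdown = 0.0014 / 100        # 0.0014 percent expressed as a fraction
window_seconds = 20 * 3600     # total smear window: 20 hours

absorbed = slowdown * window_seconds
print(f"seconds absorbed over the window: {absorbed:.3f}")   # ~1.008 s
```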

Enterprises running virtual-machine instances on Google Compute Engine, and those using Google APIs, will want to keep their own systems synchronized with Google’s slightly slower clocks during that 20-hour period. Client systems will also have to be set to that time in order to work with those servers. And it won’t work to run some servers on smeared time and some on regular time, because then clients won’t know which time to follow, Google says.

So the company is making its NTP servers available free through the Google Public NTP service. Users can take advantage of the service by configuring their network settings to use time.google.com as their NTP server. The company laid out detailed instructions for synchronizing systems to its smeared time.
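For a quick look at what the service returns, the sketch below queries time.google.com using the third-party ntplib Python package; real deployments would instead follow Google's instructions and point the operating system's NTP daemon at the server.

```python
# Quick query of Google Public NTP using the third-party ntplib package
# (pip install ntplib). Production systems would instead point their
# ntpd or chrony configuration at time.google.com per Google's instructions.
from datetime import datetime, timezone

import ntplib

client = ntplib.NTPClient()
response = client.request("time.google.com", version=3)

print("offset from local clock (s):", response.offset)
print("server time (UTC):", datetime.fromtimestamp(response.tx_time, tz=timezone.utc))
```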

Google won’t be the only company smearing time on Dec. 31. Akamai plans to slow down its clocks over a 24-hour period around the leap second. Amazon and Microsoft have done the same thing in the past.

In fact, the big cloud companies look ready to standardize on a 24-hour “leap smear.” Google plans to use the longer transition for the next leap second, partly to ease more slowly into the extra second and partly to align itself with other companies. There’s no date yet for the next leap second, but Google expects it to come in 2018.

Leap seconds began in 1972 and are now administered by the International Earth Rotation and Reference Systems Service (IERS). They’re needed because Earth’s rotation isn’t uniform. It’s affected by things like tides in the oceans and the movement of magma beneath the Earth’s crust. Atomic clocks, which set the standard for most timekeeping, are more consistent than that.

Amazon will literally truck your data into its cloud

Its new 'Snowmobile' data truck offers 100PB of data transfer

It can be hard moving large amounts of data to the cloud. Even with a consistent 10Gbps connection, it would take years to get hundreds of petabytes from an on-premises data center to a public cloud provider.

Amazon is aiming to speed that process up with a high-capacity data transfer product: a literal truck. The Snowmobile is a big, white semi-trailer that can hold 100PB of data. Once filled, it gets driven to an Amazon endpoint, where the data is loaded into its public cloud storage.

For smaller migrations that can also benefit from processing at the edge, Amazon announced a new Snowball Edge appliance that provides 100TB of storage and local compute power for handling data transfer and processing.

These new products, announced Wednesday, are aimed at helping companies get large amounts of data into Amazon’s cloud, which will then encourage them to stick with the company's services going forward. That’s especially important because the company charges for data egress, and the costs of a full migration to a competing cloud provider could be too much for some businesses to bear.

For companies that want to move large swaths of data, the Snowmobile could be a useful tool. Amazon will back it up to a customer’s data center, and the truck can handle data transfers of up to 1Tbps (terabit per second) by hooking up multiple 40Gbps fiber connections. The Snowmobile can be filled in about 10 days at maximum speed, according to a blog post from AWS evangelist Jeff Barr.
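A back-of-the-envelope calculation, assuming a full 100PB payload and sustained line rates, shows how those figures relate: moving the data over a 10Gbps link would take years, while filling the truck at 1Tbps takes on the order of 10 days.

```python
# Back-of-the-envelope transfer times for a full 100PB Snowmobile payload,
# assuming sustained line rates and decimal units throughout.
PAYLOAD_BITS = 100 * 10**15 * 8          # 100 petabytes in bits

def days_to_transfer(bits, bits_per_second):
    return bits / bits_per_second / 86400

wan_days = days_to_transfer(PAYLOAD_BITS, 10 * 10**9)     # 10Gbps WAN link
truck_days = days_to_transfer(PAYLOAD_BITS, 1 * 10**12)   # 1Tbps into the truck

print(f"over a 10Gbps link: {wan_days / 365:.1f} years")   # ~2.5 years
print(f"into a Snowmobile:  {truck_days:.1f} days")        # ~9.3 days
```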

What's more, the truck is waterproof and can be parked in either covered or uncovered locations.

Amazon takes the security of the trailer seriously — the company can provide security for the Snowmobile while it's located at a customer's data center and will provide an escort for the data to its destination. Each container will provide GPS tracking as well, and users' data is encrypted.

The trucks are already being used, including by one large customer that is undergoing a "pretty gigantic" migration, AWS CEO Andy Jassy said during a press conference.

The Snowball Edge, by contrast, is designed to be a more easily portable and compute-heavy appliance. Like its namesake, the Snowball migration appliance, it’s a ruggedized storage device that comes with an e-ink shipping label to get it back to Amazon for data transfer into AWS.

It holds 100TB of data, compared to the Snowball’s 80TB, and also sports a touchscreen for interacting with the device. That’s useful because the device can run AWS Lambda functions on-device, meaning the Snowball Edge is able to provide analytics locally. The appliance can also do all its own data encryption, which makes transfer operations faster.

The Snowball Edge was designed to be useful for situations like research boats that don't have internet connectivity, Jassy said during his keynote address at Amazon's Re:Invent conference. A Snowball Edge can collect data, slice down a subset of key information for on-premises processing, and then send the rest of the data to Amazon's cloud.

The device was among a fleet of services that Amazon announced at Re:Invent, including new machine learning-driven APIs and major updates to its compute services.

Microsoft puts quantum computing higher on its hardware priority list

The company is stepping up efforts to make quantum computing hardware and software

Microsoft is accelerating its efforts to make a quantum computer as it looks to a future of computing beyond today’s PCs and servers.

Microsoft has researched quantum computing for more than a decade. Now the company’s goal is to put the theory to work and create actual hardware and software.

To that end, Microsoft has tapped Todd Holmdahl—who was involved in the development of Kinect, HoloLens, and Xbox—to lead the effort to create quantum hardware and software. The company has also hired four prominent university professors to contribute to its research.

Quantum computers, in theory, can significantly outperform today’s supercomputers. The ultimate goal is to create universal quantum computers that can run all existing programs and conduct a wide range of calculations, much like today’s computers. Early quantum computers can be used to run only a limited number of applications.

Companies like IBM, D-Wave, and Google are researching quantum computing. IBM researchers have said a universal quantum computer is still decades out, so their focus is on creating hardware targeted at solving specific problems.

D-Wave and IBM have created quantum computers based on different theories, and the companies have bashed each other’s designs. D-Wave is trying to get more programmers to test its hardware so it can be used for more applications.

It’s not known when Microsoft’s quantum hardware will come out. Like others, Microsoft will have to make quantum circuits on which it can test applications and tackle issues like error correction, fault tolerance, and gating. Practical hardware will be released only after a number of quantum computing issues are resolved. But Microsoft is already offering a simulation of quantum computers via a software toolkit.

Conventional computers represent data in the forms of 1s and 0s, but quantum computers are far more complex. At the center of quantum computers are qubits, which can harness the laws of quantum mechanics to achieve various states. A qubit can hold a one and zero simultaneously and expand to states beyond that.

Qubits allow quantum computers to calculate in parallel, making them more powerful than today’s fastest computers. But qubits can be fragile, and interference from matter or electromagnetic radiation can wreck a calculation.
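As a loose classical illustration of that scaling (not a real quantum program), the numpy sketch below builds a three-qubit register: its 2^3 = 8 amplitudes are all populated after a Hadamard gate is applied to each qubit, which is the sense in which a single operation touches every basis state at once.

```python
# Toy classical illustration (not a quantum program): the state of n qubits
# is a vector of 2**n complex amplitudes, so one gate application updates
# every basis state at once. Requires numpy.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate for one qubit
zero = np.array([1, 0], dtype=complex)         # the |0> state of one qubit

n = 3
state, gate = zero, H
for _ in range(n - 1):                         # extend to a 3-qubit register
    state = np.kron(state, zero)
    gate = np.kron(gate, H)

state = gate @ state                           # put every qubit in superposition
print(state)                # 2**3 = 8 equal amplitudes of 1/sqrt(8)
print(np.abs(state) ** 2)   # measurement probabilities, each 1/8
```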

Researchers at Microsoft are working on an entirely new topological quantum computer, which uses exotic materials to limit errors. There are still questions about the viability of such materials and outcomes, so it could take a long time for Microsoft to make practical quantum circuits.

Interest in quantum computing is growing as it becomes difficult to manufacture smaller chips to speed up PCs and servers. Neuromorphic chips and quantum circuits represent a way to move computing into the future.

Microsoft’s new hires include Leo Kouwenhoven, a professor at the Delft University of Technology in the Netherlands; Charles Marcus, a professor at the University of Copenhagen; Matthias Troyer, a professor at ETH Zurich; and David Reilly, a professor at the University of Sydney in Australia. All of them are retaining their professor titles.

Open-source hardware makers unite to start certifying products

The Open Source Hardware Association certification could help buyers hack into and copy products

Four years ago, Alicia Gibb was trying to unite a fragmented open-source hardware community to join together to create innovative products.

So was born the Open Source Hardware Association, which Gibb hoped would foster a community of hardware "hackers" sharing, tweaking, and updating hardware designs. It shared the ethics and ethos of open-source software and encouraged the release of hardware designs -- be they for processors, machines, or devices -- for public reuse.

Since then, OSHWA has gained strength, with Intel, Raspberry Pi, and Sparkfun endorsing the organization. Its growth has coincided with the skyrocketing popularity of Arduino and Raspberry Pi-like developer boards -- many of them open source -- to create gadgets and IoT devices.

In recent weeks, OSHWA also met one of its initial goals: to start certifying open-source hardware. The goal of certification is to clearly identify open-source hardware separate from the mish-mash of other hardware products. The certification allows hardware designs to be replicated.

For certification, OSHWA requires hardware creators to publish a bill-of-materials list, software, schematics, design files, and other documents required to make derivative products. Those requirements could apply to circuit boards, 3D printed cases, electronics, processors, and any other hardware that meets OSHWA's definition of open-source hardware.

When hardware makers fill out a legally binding agreement, they are allowed to use an Open Hardware mark. OSHWA will host a directory for all certified products, something that doesn't exist today because the community is so fragmented.

"Users feel more confident about a product when they can see how it works," Gibb said. "Knowing your product's privacy features, compatibility with other tools, and ease of customization can encourage buyers to choose you over a competitor."

Open-sourcing hardware offers other benefits, Gibb said. The typical patent process for hardware can be costly and time-consuming, and open sourcing hardware can instead get a product to market more quickly without a giant financial burden.

That's especially relevant at a time when more individual makers are swiftly creating compelling products at home. Cheap commodity components are powerful enough to create what could be the next big hit product.

Some notable open hardware products certified by OSHWA in just a few weeks include the BeagleBone Black Wireless and a number of other boards from SparkFun. The list will grow over the coming months. There are many open-source developer boards, like MinnowBoard, Orange Pi, and 96Boards single-board computers, that could be registered.

Products being certified also include 3D printed devices. 3D Central has certified 3D printable over-ear headphones and has published the print files, documented CAD, and assembly instructions.

"If a customer buys a pair of our headphones, the certification provides a way for a user to easily access the documentation," said Andrew Sink, director of business development at 3D Central. For example, buyers can easily "replace the foam pads when they wear out or reprint a part that's broken," he added.

As a maker himself, Sink is excited about the open-source hardware directory because it shows existing products he can incorporate into projects. The certification solves a problem of attribution for the creator.

"Most of our designs are published using Creative Commons Share-Alike copyright license, and it is always painful when our designs are sold by competitors who do not provide the attribution," Sink said.

For products that are Open Hardware certified, the logo effectively is an attribution that stays with the product, Sink said.

OSHWA has built credibility in the open-source hardware community through its popular Open Hardware Summit. The organization has been endorsed by universities, makers, companies, and hacker spaces.

With the certification, hardware makers will feel a sense of belonging to the community, said Michael Weinberg, a board member for OSHWA and intellectual property lawyer and general counsel at Shapeways.

"People want to be associated with open source," Weinberg said.

For OSHWA, certifying products could set in motion the creation of a central resource for open-source hardware. But like open-source software licensing, it's a long process. Unity among the makers on the definition of open-source hardware will be important.

Some approvals for products that don't deserve certification could initially slip through the cracks, but the vetting process to ensure hardware is really open source will intensify over time, Weinberg said.

The certification has limits, however. Open Hardware is only a certification, and it won't protect companies from getting sued for copyright infringement, Weinberg said.

If another hardware maker alleges an Open Hardware product infringes copyright, then OSHWA will talk to its maker about expectations and definition of open-source hardware. The organization will learn and grow, he said.

Gibb has more plans for OSHWA. She wants to ask the U.S. Patent and Trademark Office to refer to the open-source product directory as a source of prior art. 

"Prior art is what allows open source hardware to be recognized and [blocks] the ability to patent the same work. It is integral to open-source hardware," Gibb said.

Once the definition of Open Hardware matures, the certification directory could become a full-fledged open-source hardware repository, Gibb said. For now, there are no such plans because creating and managing a repository is a huge task.

"It may take a while for the community to determine what the core functions and features of a repository would be, but the certification has started that conversation," Gibb said.

U.S. sets plan to build two exascale supercomputers

Both systems, using different architectures, will be developed simultaneously in 2019 -- if the Trump administration goes along with the plan

The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers—costing roughly $200 million to $300 million each—by 2019.

The two systems will be built at the same time and will be ready for use by 2023, although it’s possible one of the systems could be ready a year earlier, according to U.S. Department of Energy officials.

But the scientists and vendors developing exascale systems do not yet know whether President-Elect Donald Trump’s administration will change directions. The incoming administration is a wild card. Supercomputing wasn’t a topic during the campaign, and Trump’s dismissal of climate change as a hoax, in particular, has researchers nervous that science funding may suffer.

At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama’s administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that “pointed-head geeks are not going to be well appreciated.”

Another person in the audience, John Sopka, a high-performance computing software consultant, asked how the science community will defend itself from claims that “you are taking the money from the people and spending it on dreams,” referring to exascale systems.

Paul Messina, a computer scientist and distinguished fellow at Argonne National Laboratory who heads the Exascale Computing Project, appeared sanguine. "We believe that an important goal of the exascale computing project is to help economic competitiveness and economic security," said Messina. "I could imagine that the administration would think that those are important things."

Politically, there ought to be a lot in HPC’s favor. A broad array of industries rely on government supercomputers to conduct scientific research, improve products, attack disease, create new energy systems and understand climate, among many other fields. Defense and intelligence agencies also rely on large systems.

The ongoing exascale research funding (the U.S. budget is $150 million this year) will help with advances in software, memory, processors and other technologies that ultimately filter out to the broader commercial market.

This is very much a global race, which is something the Trump administration will have to be mindful of. China, Europe and Japan are all developing exascale systems.

China plans to have an exascale system ready by 2020. These nations see exascale—and the computing advances required to achieve it—as a pathway to challenging America’s tech dominance.

“I’m not losing sleep over it yet,” said Messina, of the possibility that the incoming Trump administration may have different supercomputing priorities. “Maybe I will in January.”

The U.S. will award the exascale contracts to vendors with two different architectures. This is not a new approach and is intended to help keep competition at the highest end of the market. Recent supercomputer procurements include systems built on the IBM Power architecture, Nvidia’s Volta GPU and Cray-built systems using Intel chips.

The timing of these exascale systems—ready for 2023—is also designed to take advantage of the upgrade cycles at the national labs. The large systems that will be installed in the next several years will be ready for replacement by the time exascale systems arrive.

The last big performance milestone in supercomputing occurred in 2008 with the development of a petaflop system. An exascale system delivers 1,000 petaflops, and building one is challenging because of the limits of Moore’s Law, the 1960s-era observation that the number of transistors on a chip doubles about every two years.

“Now we’re at the point where Moore’s Law is just about to end,” said Messina in an interview. That means the key to building something faster “is by having much more parallelism, and many more pieces. That’s how you get the extra speed.”

An exascale system will solve a problem 50 times faster than the 20-petaflop systems in use in government labs today.

Development work has begun on the systems and applications that can utilize hundreds of millions of simultaneous parallel events. “How do you manage it—how do you get it all to work smoothly?” said Messina.

Another major problem is energy consumption. An exascale machine can be built today using current technology, but such a system would likely need its own power plant. The U.S. wants an exascale system that can operate on 20 megawatts and certainly no more than 30 megawatts.

Scientists will have to come up with a way “to vastly reduce the amount of energy it takes to do a calculation,” said Messina. The applications and software development are critical because most of the energy is used to move data. And new algorithms will be needed.

About 500 people are working at universities and national labs on the DOE’s coordinated effort to develop the software and other technologies exascale will need.

Aside from the cost of building the systems, the U.S. will spend millions funding the preliminary work. Vendors want to maintain the intellectual property of what they develop. If it costs, for instance, $50 million to develop a certain aspect of a system, the U.S. may ask the vendor to pay 40% of that cost if it wants to keep the intellectual property.

A key goal of the U.S. research funding is to avoid creation of one-off technologies that can only be used in these particular exascale systems.

“We have to be careful,” Terri Quinn, a deputy associate director for HPC at Lawrence Livermore National Laboratory, said at the SC16 panel session. “We don’t want them (vendors) to give us capabilities that are not sustainable in a business market.”

The work under way will help ensure that the technology research is far enough along to enable the vendors to respond to the 2019 request for proposals.

Supercomputers can deliver advances in modeling and simulation. Instead of building physical prototypes of something, a supercomputer can allow modeling virtually. This can speed the time it takes something to get to market, whether a new drug or car engine. Increasingly, HPC is used in big data and is helping improve cybersecurity through rapid analysis; artificial intelligence and robotics are other fields with strong HPC demand.

China will likely beat the U.S. in developing an exascale system, but the real test will be how useful those systems turn out to be.

Messina said the U.S. approach is to develop an exascale ecosystem involving vendors, universities and the government. The hope is that the exascale systems will not only have a wide range of applications ready for them, but also applications that are relatively easy to program. Messina wants to see these systems quickly put to broad use.

“Economic competitiveness does matter to a lot of people,” said Messina.

This story, "U.S. sets plan to build two exascale supercomputers" was originally published by Computerworld.

Time is running out for NTP

Everyone benefits from Network Time Protocol, but the project struggles to pay its sole maintainer or fund its various initiatives

There are two types of open source projects: those with corporate sponsorship and those that fall under the “labor of love” category. Actually, there’s a third variety: projects that get some support but have to keep looking ahead for the next sponsor.

Some open source projects are so widely used that if anything goes wrong, everyone feels the ripple effects. OpenSSL is one such project; when the Heartbleed flaw was discovered in the open source cryptography library, organizations scrambled to identify and fix all their vulnerable networking devices and software. Network Time Protocol (NTP) arguably plays as critical a role in modern computing, if not more; the open source protocol is used to synchronize clocks on servers and devices to make sure they all have the same time. Yet, the fact remains that NTP is woefully underfunded and undersupported.

NTP is more than 30 years old—it may be the oldest codebase running on the internet. Despite some hiccups, it continues to work well. But the project’s future is uncertain because the number of volunteer contributors has shrunk, and there’s too much work for one person—principal maintainer Harlan Stenn—to handle. When there is limited support, the project has to pick and choose what tasks it can afford to complete, which slows down maintenance and stifles innovation.

“NTF’s NTP project remains severely underfunded,” the project team wrote in a recent security advisory. “Google was unable to sponsor us this year, and currently, the Linux Foundation’s Core Internet Initiative only supports Harlan for about 25 percent of his hours per week and is restricted to NTP development only.”

Last year, the Linux Foundation renewed its financial commitment to NTP for another year via the Core Infrastructure Initiative, but it isn’t enough.

The absence of a sponsor has a direct impact on the project. One of the vulnerabilities addressed in the recently released ntp-4.2.8p9 update was originally reported to the project back in June. In September, the researcher who discovered the flaw, which could be exploited with a single, malformed packet, asked for a status update because 80 days had passed since his initial report. As the vulnerability had already existed for more than 100 days, Magnus Studman was concerned that more delays gave “people with bad intentions” more chances to also find it.

Stenn’s response was blunt. “Reality bites—we remain severely under-resourced for the work that needs to be done. You can yell at us about it, and/or you can work to help us, and/or you can work to get others to help us,” he wrote.

Researchers are reporting security issues, but there aren’t enough developers to help Stenn fix them, test the patches, and document the changes. The Linux Foundation’s CII support doesn’t cover the work on new initiatives, such as the Network Time Security (NTS) and the General Timestamp API, or on standards and best practices work currently underway. The initial support from CII covers “support for developers as well as infrastructure support.”

NTS, currently in draft version with the Internet Engineering Task Force (IETF), would give administrators a way to add security to NTP, as it would secure time synchronization. The mechanism uses Datagram Transport Layer Security (DTLS) to provide cryptographic security for NTP. The General Timestamp API would develop a new time-stamp format containing more information than date and time, which would be more useful. The goal is to also develop an efficient and portable library API to use those time stamps.

Open source projects and initiatives struggle to keep going when there isn’t enough support, sponsorship, financial aid, and manpower. This is why open source security projects frequently struggle to gain traction among organizations. Organizations don’t want to wind up relying on a project when future support is uncertain. In a perfect world, open source projects that are critical parts of core infrastructure should have permanent funding.

NTP is buried so deeply in the infrastructure that practically everyone reaps the project’s benefits for free. NTP needs more than simply maintaining the codebase, fixing bugs, and improving the software. Without help, the future of the project remains uncertain. NTP—or the Network Time Foundation established to run the project—should not have to struggle to find corporate sponsors and donors.

“If accurate, secure time is important to you or your organization, help us help you: Donate today or become a member,” NTP’s project team wrote.

NativeScript deepens ties to Angular 2

The JavaScript framework's 2017 road map includes accommodations for Chrome tools and Windows 10

NativeScript, Progress Software's framework for building native mobile apps with JavaScript, will be tweaked for performance and debugging with an upcoming upgrade. Further integration with the Angular 2 JavaScript framework is in the works as well.

NativeScript 2.5, due in January, will feature ahead-of-time compilation to improve boot-up time on Android devices, said Todd Anglin, chief evangelist at Progress. The upgrade also will be fully integrated with Chrome developer tools for debugging and working with NativeScript apps. Such capabilities as step debugging, in which developers walk through code one line at a time, and UI tree inspection will be available.

Windows 10 support will be added next year so that developers can share the same code they use on iOS and Android on Windows mobile units. Also on tap are polyfills enabling use of technologies such as the canvas 2D Web API. "We want to enable that canvas code to work inside of a native app or a NativeScript app," said Anglin.

Deeper integration with the Angular 2 framework and community, meanwhile, involves using Angular's command line interface and debugging tools, such as Augury. Angular 2 brings options for performing common tasks like navigating between views and binding views to data, Anglin noted. (Google released Angular 2 in September, and it is already planning Angular 3 for a March 2017 release.)

For performance, NativeScript builds native UIs, driven by JavaScript code running in a virtual machine. "It's not actually being cross-compiled into Swift or into Java or anything else like that," Anglin said. "It's the actual JavaScript running in this virtual machine, which actually can deliver very high performance." A native-to-JavaScript bridge translates between JavaScript and native API calls and vice versa.

San Francisco’s Muni transit system reportedly hit by ransomware

The ransomware attacker is said to be demanding $73,000

San Francisco’s Muni transit system has reportedly been hit by ransomware since Friday, with the message “You Hacked, ALL Data Encrypted” displayed on computer screens at stations, according to newspaper reports.

The message said that cryptom27 at yandex.com should be contacted for the key to unlock the data.

Fare payment machines at stations also displayed that they were “out of service,” and San Francisco's Municipal Railway, widely known as Muni, was allowing free rides on its light-rail vehicles as it was unable to charge customers, according to the Examiner.

The San Francisco Municipal Transportation Agency could not be immediately reached for comment on Sunday.

The ransomware is believed to be a variant of HDDCryptor, which uses commercial tools to encrypt hard drives and network shares, according to CSO’s Salted Hash. Trend Micro said in September that the malware is a threat both to consumers and enterprises as it not only "targets resources in network shares such as drives, folders, files, printers, and serial ports via Server Message Block (SMB), but also locks the drive."

On Sunday, the San Francisco Examiner was reporting that the computer systems at the transit system had been restored following the Friday malware attack. It said that a person who may have spread the ransomware was demanding $73,000 from Muni to unlock its data.

It isn’t clear at this point whether the transit system paid up to unlock its data or took other measures. The bitcoin wallet the attacker referred to in email communications referenced by Salted Hash was still empty late Sunday, suggesting that no payment was made at least into that wallet.

GitLab looks to transform app testing

The Review Apps feature lets developers create temporary apps for reviewing merge requests before moving to production

GitLab is providing "ephemeral app environments" to enhance application testing on its code repository platform with Review Apps, a capability unveiled Tuesday.

Review Apps lets developers create temporary applications to see how existing code works and to review merge requests before shipping into production. This way, code does not need to be pulled down locally. An extension of GitLab's CI capabilities, the feature is being added in GitLab 8.14.

With Review Apps, developers can test and demo new features, while product managers can see what a merge request would look like. They are created dynamically when a new branch is pushed up to GitLab and are automatically deleted when the branch is.

"We believe that with Review Apps, a number of tasks development teams do today could become optional,"said Mark Pundsack, head of product at GitLab. "For example, the development stage, used to view review changes in the environment a developer is pushing to, likely goes away since Review Apps spin up a live environment for each merge request."

Staging also could become optional. "For most teams, staging is often the first time designers and product managers get to see and click through the effects of the code developers have written," said Pundsack. "With Review Apps designers and product managers can give feedback earlier in the process by using the Review Apps preview link to see the live changes."

Developers can test if changes work without doing any additional work beyond submitting a merge request, and designers, product managers, and quality assurance engineers won't have to check out branches or spin up a staging environment to preview changes. Review Apps also means it will be easier to give feedback, GitLab said.

Future Windows 10 phones could run full-fledged PC programs

ARM and x86: the holy grail?

When the HP Elite x3 launched earlier this year, we lamented its likely legacy as the last great Windows 10 phone. It stood alone as the embodiment of Microsoft’s PC-as-phone vision at a time when Microsoft was ruthlessly burning its mobile hardware division to the ground and gutting what few Nokia remnants lingered. But now it appears that the HP Elite x3’s highlight feature—the ability to run PC software on a phone—may actually find its way into Windows 10 Mobile’s core at some point in the future.

Frequent Windows sleuth WalkingCat dredged up hints of Windows 10's ability to emulate x86 (read: PC) software on ARM (read: mobile) processors, via a “CHPE” designation in code.  

Mary Jo Foley, a Windows reporter with impeccable sources, followed up on the report today. Foley says “CHPE” indeed refers to Microsoft plans to introduce x86 emulation to Windows 10 in a “Redstone 3” update in fall 2017. The “C” stands for “Cobalt,” Microsoft’s code name for x86 emulation, according to her sources; “HP” literally stands for the company HP; and “E” remains unclear, but potentially stands for “emulation.”

So why does this matter? Because native x86 software support would dramatically improve the utility of Continuum, Windows 10 Mobile’s flagship feature. Continuum allows you to use your Windows phone like a PC when you connect it to an external display and keyboard—but right now, the only apps that work in Continuum mode are Universal Windows Platform apps, which are limited in number and don’t include many key programs demanded by business users and hardcore PC enthusiasts.

Even the Elite x3 runs its x86 PC apps in a virtualized cloud environment, rather than on-device.

The idea of emulating full-fledged PC programs on mobile devices sounds challenging, especially since much of the software that pros rely on tends to be resource-hungry. Avoiding performance or battery-life penalties could prove difficult. But working x86 apps mixed with ARM’s legendary power efficiency could be a computing holy grail if Microsoft manages to pull it off.

The story behind the story: “Technically, there are really two things that are unique about Windows Mobile,” Windows chief Terry Myerson said in an interview with ZDNet in late October. “One is cellular connectivity and the other one is the ARM processors that are there. So we’re going to continue to invest in ARM and cellular. And while I’m not saying what type of device, I think we’ll see devices there, Windows devices, that use ARM chips. I think we’ll see devices that have cellular connectivity.”

So sure, this x86 emulation tidbit—if true—keeps the dream of the fabled Surface Phone alive. But reading between Myerson’s words, Windows 10 Mobile’s future may not even necessarily include phones.

This story, "Future Windows 10 phones could run full-fledged PC programs" was originally published by PCWorld.

Microsoft embraces open source in the cloud and on-premises

Microsoft is positioning itself as the software vendor of choice for enterprises that maintain hybrid cloud environments, and it's opening its arms to Linux and open source software to do it.

With the announcement of a broad swathe of new data products and services at Microsoft Connect in New York City last week -- including that the next release of SQL Server will support Linux (and Docker) -- the software giant has signaled a renewed focus on customer choice and flexibility, underscoring the increasing importance of cloud computing as a central pillar of its business.

"We've been on this journey for the last few years now," says Rohan Kumar, general manager, Database Systems, Microsoft. "It's really a company about choice right now. We really want to meet customers where they are."

Microsoft has offered multiple flavors of Linux on its Azure public cloud platform and infrastructure for several years now.

"Microsoft loves Linux," Microsoft CEO Satya Nadella said during the 2014 announcement of new Azure services. "Twenty percent of Azure is already Linux. We will always have first-class support for Linux [distributions]."

Redmond meets The Linux Foundation

Microsoft took that love another step last week. In a move that would have been stunning more than a decade ago, it joined The Linux Foundation — which sponsors the work of Linux creator Linus Torvalds and plays a central role in the promotion of open source software — as a platinum sponsor.

"As a cloud platform company, we aim to help developers achieve more using the platforms and languages they know," Scott Guthrie, executive vice president, Microsoft Cloud and Enterprise Group, said in a statement last week. "The Linux Foundation is home not only to Linux, but many of the community's most innovative open source projects. We are excited to join The Linux Foundation and partner with the community to help developers capitalize on the shift to intelligent cloud and mobile experiences."

As part of its sponsorship, John Gossman, architect on the Microsoft Azure team, is joining the board of directors of The Linux Foundation.

Once the tireless defender of proprietary software against the insurgency of the open source model, Microsoft has come around in recent years. It contributes to a variety of Linux Foundation and Apache Software Foundation projects, including the Linux Foundation's Node.js Foundation, OpenDaylight, Open Container Initiative, R Consortium and Open API Initiative. It also maintains a repository of its own open source code.

"Microsoft has grown and matured in its use of and contributions to open source technology," Jim Zemlin, executive director of The Linux Foundation, said in a statement last week. "The company has become an enthusiastic supporter of Linux and of open source and a very active member of many important projects. Membership is an important step for Microsoft, but also for the open source community at large, which stands to benefit from the company's expanding range of contributions."

It's messy out there

The reason, Microsoft's Kumar says, is simple: In the messy, real world of enterprise IT, hybrid shops are the norm and customers don't need or want vendors to force their hands when it comes to operating systems. Serving these customers means giving them flexibility.

That philosophy has spread from Microsoft's cloud business to its on-premises infrastructure business as the company seeks to make support for hybrid environments a key differentiator of its cloud and on-premises offerings (an idea Nadella pushed as Microsoft's executive vice president of Cloud and Enterprise before his ascension to CEO). Last week, Joseph Sirosh, corporate vice president of the Data Group at Microsoft, announced that the next release of SQL Server would, for the first time, support Linux.

"Now you can also develop applications with SQL Server on Linux, Docker or macOS (via Docker) and then deploy to Linux, Windows, Docker, on-premises or in the cloud," Sirosh wrote in a blog post. "This represents a major step in our journey to making SQL Server the platform of choice across operating systems, development languages, data types, on-premises and in the cloud."

Kumar adds that customers tell Microsoft, "I want to use SQL and don't care about what's underneath it. I don't want to worry about it, I just want to know that whenever I want to install SQL, I have the choice to do that."

All major features of the SQL Server relational database engine are coming to Linux, Sirosh said, including advanced features such as in-memory online transactional processing (OLTP), in-memory columnstores, Transparent Data Encryption, Always Encrypted and Row-Level Security. There will be native Linux installations with familiar RPM and APT packages for Red Hat Enterprise Linux, Ubuntu Linux and SUSE Linux Enterprise Server. He noted that the public preview of the next release of SQL Server, in both Windows and Linux flavors, will be available on Azure Virtual Machines and as images on Docker Hub.

In addition, as a further sign of its commitment to flexibility, Sirosh announced SQL Server 2016 SP1, a service pack that introduces a consistent programming model across SQL Server editions, meaning programs written to exploit in-memory OLTP, in-memory columnstore analytics and partitioning will work across the Enterprise, Standard and Express editions.

"Developers will find it easier than ever to take advantage of innovations such as in-memory databases and advanced analytics — you can use these advanced features in the Standard Edition and then step up to Enterprise for mission critical performance, scale and availability — without having to re-write your application," Sirosh wrote.

Microsoft has also published its JDBC Connector as 100 percent open source and updated its ODBC for PHP driver and launched a new ODBC for Linux connector, all to make it easier to work with Microsoft SQL-based technologies regardless of the underlying OS. Additionally, Microsoft VSCode users can now connect to SQL Server, including SQL Server on Linux, Azure SQL Database and Azure SQL Data Warehouse. The company has also updated its SQL Server Management Studio, SQL Server Data Tools and Command line tools to support SQL Server on Linux.
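
To illustrate how those connectors get used from application code, below is a hedged sketch of querying a SQL Server instance from Python over ODBC; the driver name, server address, and credentials are illustrative assumptions rather than values taken from Microsoft's announcement.

    # Minimal sketch: connect to SQL Server (on Linux or Windows) via ODBC.
    # Assumes the Microsoft ODBC driver and the pyodbc package are installed.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 13 for SQL Server};"  # driver name is an assumption
        "SERVER=sql-linux.example.com,1433;"       # hypothetical host
        "DATABASE=master;"
        "UID=sa;PWD=YourStrong!Passw0rd"           # placeholder credentials
    )
    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION")             # same T-SQL regardless of host OS
    print(cursor.fetchone()[0])
    conn.close()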

"I'm excited about Microsoft as a company truly embracing choice," Kumar says. "We're clearly seeing the base getting energized in a big way. People are giving us a chance again."

This story, "Microsoft embraces open source in the cloud and on-premises" was originally published by CIO.

Symantec acquisition has lawsuit-filled past

Symantec acquisition has lawsuit-filled past

Identity protection firm LifeLock settled with the FTC for $100M a year ago over false advertising claims

LifeLock, the identity protection vendor that Symantec today said it would acquire for $2.3 billion, has been the frequent target of lawsuits filed by customers, state attorneys general and the Federal Trade Commission (FTC).

Less than a year ago, LifeLock paid $100 million to settle a contempt complaint brought by the FTC. The agency had charged that the Arizona company violated a 2010 order and settlement by again engaging in false advertising and by failing to implement promised security measures to safeguard customers' personal information.

The $100 million was a record amount obtained by the FTC in an order enforcement action.

Symantec today said that it was confident that LifeLock's troubles were behind it. "We are thoroughly satisfied that any previous issues are in the past," said a company spokeswoman in an email reply to questions. "Consumers vote with their wallets and there are 4.4 million happy and committed LifeLock customers -- and growing."

But not every past LifeLock customer was happy.

Consumers began suing LifeLock in 2008, claiming that the company engaged in false advertising and deceptive trade practices. Earlier that year, credit reporting bureau Experian sued LifeLock for placing false fraud alerts on consumers' credit-history files.

In 2010, LifeLock settled with the FTC and 35 state attorneys general over fraudulent advertising charges, paying $12 million in the process. Much of that money was returned to consumers. The federal agency had accused LifeLock of overstating the benefits of its service and of using "scare tactics" to gain subscribers.

"This was a fairly egregious case of deceptive advertising," said then-FTC Chairman Jon Leibowitz at the time.

Last year, the FTC said that LifeLock had not abided by the settlement of five years before. "The fact that consumers paid LifeLock for help in protecting their sensitive personal information makes the charges in this case particularly troubling," said Edith Ramirez, the current chairwoman of the FTC, in a December 2015 statement.

Under the settlement, $68 million went to consumers who had joined a class action lawsuit against LifeLock. Those checks went out last month. An Arizona federal court holds the remaining $32 million.

In addition to the $100 million payment, last year's settlement extended LifeLock's record-keeping requirements, an integral part of the original agreement, until 2023. Today, Symantec said it would assume responsibilities for overseeing LifeLock's business practices once the acquisition is finalized.

The $2.3 billion deal is expected to close early in 2017.

This story, "Symantec acquisition has lawsuit-filled past" was originally published by Computerworld.

New ZUK Edge photos appear to show little signs of a curved panel

New ZUK Edge photos appear to show little signs of a curved panel

After a few hints here and there and an official visit to TENAA, we are already quite certain that the ZUK Edge will be the next handset to come out of Lenovo's subsidiary. The phone's retail box has already leaked, but now we get to see the actual Edge in the wild.

The device was spotted on Weibo in a total of three stills. Two of them are quite blurry and reveal little extra detail. However, they do show the handset booting with a ZUK logo and then running what is presumed to be an Android 6.0 Marshmallow ROM.

The third photo appears to be a lot more interesting. It offers a closeup view of the top part of the phone and thus a more detailed look at the design. Better still, the unit in question is white, as opposed to the black one TENAA received for its obligatory shots. That makes the design a lot easier to make out, and oddly enough, it appears the Edge moniker might be misleading.

The ZUK Edge seems to have thin bezels all around, but the display doesn't appear curved, at least not in the Samsung way. Still, there is the possibility that the angle is deceptive and that Lenovo has gone with a ZTE-style approach of optically simulating a curve near the edges. That would certainly cost less than a true flexible panel and would fit better with the value nature of the ZUK brand.

As for specs, the ZUK Edge is powered by a Snapdragon 821 SoC, and the aforementioned panel is said to measure 5.5 inches in diagonal with a 1080p resolution. RAM is 4GB, while storage options include 32GB and 64GB. The device features a 13MP rear camera and an 8MP front shooter. It's 7.68mm thick and packs a 3,000mAh battery.

The ZUK Edge is expected to launch on the local Chinese market towards the end of this year, so it shouldn't be long now.

Samsung Galaxy J3 (2017) press image shows up

Samsung Galaxy J3 (2017) press image shows up

Samsung's hugely successful J-series is due for a 2017 refresh, and a lower-midrange Samsung Galaxy J3 (2017) has popped up on Geekbench, then at the FCC, and has most recently received Bluetooth certification. Well, now it's time for a press photo, courtesy of @evleaks.

Revolutions are out of the question, naturally, and the upcoming model looks similar to the J3 (2016). The slightly more rounded corners and the relocated front-facing camera won't change that much.

Mind you, this is the SM-J327P model, as opposed to the SM-J3119, which is commonly known as the Galaxy J3 Pro and has been available since mid-summer. The J3 (2017), on the other hand, is said to sport a Snapdragon 430 chipset and 2GB of RAM, and to run Android Marshmallow.

New Samsung Galaxy Note7 update encourages users to get replacement/refund

New Samsung Galaxy Note7 update encourages users to get replacement/refund

Over a week after Samsung Galaxy Note7 units in Canada started receiving a battery limiting update, another update has started rolling out that aims to encourage users of the device to get a replacement or refund.

Specifically, Bell has started pushing out the update to Galaxy Note7 users on its network. The changelog for the update - which arrives as firmware N930W8VLU2APK1 - says that "this software update will include indicators to encourage customers to contact Samsung regarding replacement/return of affected device."

It's likely that other Canadian carriers will also push out a similar update soon.

OnePlus discontinues OnePlus 3 in US and Europe

OnePlus discontinues OnePlus 3 in US and Europe

Shortly after it seemed like the OnePlus 3 could be back in stock soon, OnePlus has confirmed that it will no longer be selling the smartphone in the United States and Europe. The device will be replaced by the newly-unveiled OnePlus 3T in these regions.

The OnePlus 3T comes with a somewhat larger battery (3,400mAh), an extra storage option (128GB), an updated chipset (Snapdragon 821), and a higher-resolution (16MP) selfie camera. The 64GB storage variant of the device costs $439, while the 128GB one carries a $479 price tag.

Qualcomm's upcoming Snapdragon 835 will have Quick Charge 4

Qualcomm's upcoming Snapdragon 835 will have Quick Charge 4

Qualcomm has announced that it will be partnering with Samsung to manufacture the upcoming Snapdragon 835 mobile processor. It will be based upon Samsung's newest 10nm FinFET process, making it the smallest of its kind.

According to Qualcomm, the new 10nm FinFET process is up to 30 percent more area efficient and offers 27 percent improved performance or 40 percent less power consumption. The reduced footprint will allow hardware manufacturers to make smaller devices or include other components.

The new processor will also be the first to include the new Quick Charge 4. Qualcomm claims Quick Charge 4 provides 20 percent faster charging and 30 percent higher efficiency than Quick Charge 3, thanks to its Dual Charge parallel charge technology. It also supports USB-C and USB Power Delivery standards, which, we assume, means you will be able to fast charge USB-PD devices such as last year's Nexus and the new Pixel phones, as well as the new USB-C MacBooks using a Quick Charge 4 charger.

Quick Charge 4 includes third-generation INOV (Intelligent Negotiation for Optimum Voltage), which now provides real-time thermal management. Qualcomm is also introducing two new power management ICs, the SMB1380 and the SMB1381, which have low impedance, up to 95% peak efficiency, and advanced fast charging features such as battery differential sensing.

The Snapdragon 835 is expected to be in devices in the first half of 2017.

Samsung Galaxy S4 mini getting new security update

Samsung Galaxy S4 mini getting new security update

The Samsung Galaxy S4 mini has started receiving a new update. Currently rolling out to Vodafone-branded units in Europe, the update - which arrives as firmware version XXUCPI1 - brings along the Android security patch for the month of September.

Given that the roll out has just begun, it may take some time before you see an update notification on your device. Meanwhile, if you feel impatient, you can manually check for the update by heading to your handset's Settings menu.

ZUK Z2 will get Android 7.0 Nougat update soon

ZUK Z2 will get Android 7.0 Nougat update soon

We're slowly but surely starting to see more and more Android device makers update their products to Nougat these days. And the latest smartphone to have such an update in the pipeline may not be the one you expected.

The ZUK Z2, launched by the Lenovo-owned brand this June, is going to be graced with an official Android 7.0 build. Not only that, but the update to Nougat is going to become available in the near future.

Although we don't have anything more specific to go on just yet, this is a much-needed confirmation that an update is being worked on for the device. And it comes straight from ZUK CEO Chang Chen, so there's no doubting its authenticity. The executive has even shared a screenshot showing an Android 7.0 Nougat build running on his ZUK Z2.

So it's all good news for owners of the Z2, but what about the Z2 Pro? Unfortunately at this time we have no news about a Nougat update for that model, but it would make little sense for the Z2 to get the new Android version and the Z2 Pro not to.

New rumor says Samsung Galaxy C5 Pro and C7 Pro will be made official next month

New rumor says Samsung Galaxy C5 Pro and C7 Pro will be made official next month

According to a new rumor out of China, Samsung will officially unveil the Galaxy C5 Pro and C7 Pro smartphones sometime in December. The rumor, which came in the form of a Weibo post, also revealed some key information about the devices.

It says the Galaxy C5 Pro will carry a model number of SM-C5010 and will be powered by a Snapdragon 625 chipset. The Galaxy C7 Pro, on the other hand, will have a model number of SM-C7010 and will pack in a Snapdragon 626 SoC.

The Samsung Galaxy C7 Pro, in case you missed it, was spotted entering India last month.

Nougat firmware for Samsung Galaxy Note5 and Galaxy Tab S2 is in the works as well

Nougat firmware for Samsung Galaxy Note5 and Galaxy Tab S2 is in the works as well

Earlier this week, there were reports that Samsung had started Nougat firmware development for the Galaxy S6 and S6 edge smartphones. Now, according to a new report, Nougat firmware for the Samsung Galaxy Note5 and Galaxy Tab S2 is in the works as well.

There's no information, however, on exactly when the update will be rolled out, with the report noting that it's likely to take at least a couple of months to get it ready. That rules out a 2016 rollout, which is also the case for the Galaxy S6/S6 edge Nougat update.

Even the tech giant's latest Galaxy S series handsets - the Galaxy S7 and S7 edge, for which an official Nougat beta program is already underway - won't receive the update until 2017.

Today's tech skills redundant within a decade

Today's tech skills redundant within a decade

Half of IT workers in a global survey believe their jobs will become automated and their current skills redundant

Almost half of the IT workers responding to a global survey believe that within 10 years their job will be automated, rendering their current skills redundant.

Recruiter Harvey Nash spoke to 3,245 tech professionals across 84 countries for its 2017 tech survey, with 94 percent indicating that their careers would be severely limited if they didn’t teach themselves new skills.

Bridget Gray, managing director at Harvey Nash APAC, told CIO Australia that technology careers are in a state of flux.

“With over 50 percent of respondents indicating that their jobs are likely to be automated, it is possible that 10 years from now the IT function will look vastly different. Even for those IT professionals relatively unaffected directly by automation, there is a major indirect effect – anything up to four in 10 of their work colleagues may be machines by 2027,” Gray says.

The chance of automation varies greatly with job role, according to the report. Testers and IT operations professionals are most likely to expect their job role to be significantly affected in the next decade (67 percent and 63 percent respectively). CIOs, VPs of IT and program managers will be least affected at 31 percent and 30 percent, respectively.

Despite the increase in automation, IT workers are in high demand, with survey participants receiving at least seven ‘headhunt calls’ in the last 12 months. Software engineers and developers were in the most demand, followed by analytics and big data roles.

Respondents expected artificial intelligence, augmented and virtual reality, and robotics, as well as big data, cloud, and the internet of things, to be the most important technologies over the next five years.

Learning a priority

IT workers are prioritizing learning over any other career development tactic, with self-learning significantly more important to them than formal training or qualifications.

Only 12 percent indicated that “more training” is a key thing they want in their job, while 27 percent saw gaining qualifications as a top priority in their career.

Meanwhile, respondents were also asked what one thing they would change about their workplace if they could. More than seven percent said their boss, and nearly 15 percent wanted to be recognized for their contribution.

A further 29.9 percent wanted to work on more interesting projects, 10.4 percent wanted better job security, and 18.7 percent wanted a stronger team around them.

Agree or disagree? Within 10 years, a significant part of my job that I currently perform will be automated

Percentage who agree, by role:

Program Management: 30%
CIO, CTO or VP of IT: 31%
Software Engineering: 31%
Development Management / Team Leadership: 34%
Project Management: 37%
Architecture: 39%
Business Analysis: 44%
Developer: 47%
Infrastructure Management / Team Leadership: 51%
BI / Analytics: 53%
IT operations: 63%
Testing: 67%

Source: Harvey Nash

This story, "Today's tech skills redundant within a decade" was originally published by CIO Australia.

Web developers get their own browser

Web developers get their own browser

Blisk draws on Google's Chromium with tools for developing, debugging, and testing websites

With a belief that existing browsers were made for looking at the web and not for developers, Blisk has built a browser specifically focused on website development.

Based on Google’s Chromium open source browser project, Blisk features a toolbox for developing, debugging, and testing “modern” websites. Available via a subscription service, Blisk is in a 1.0 release, having completed a beta program. It is available for Windows and Mac.

Blisk is looking to solve a problem in which “millions of developers are suffering from setting up the development environment,” said co-founder Andrii Bakirov. “Developers need to download, set up, configure and maintain tens of different tools even before writing a single line of code. It could be different frameworks, tools, extensions and SaaS services,” he said. “To build fast and modern websites, [a] developer has to buy and set up this fragmented set of tools and then suffer from maintaining it.”

Blisk supports a variety of iOS and Android devices, with the intent of making life easier for developers. It provides a number of features, including emulation, and developers can preview a website on desktop and mobile simultaneously. It also offers navigation sync, in which a URL and scroll position are synchronized for mobile and desktop. In addition, Blisk refreshes pages every time a developer saves code changes, so there is no need to reload multiple tabs whenever code is altered. Pages are monitored for JavaScript errors, and developers can document technical issues via a one-click screenshot and record capability. Screenshots are saved to a user’s cloud storage to provide access to others.

Blisk pointed out differences it sees between its own technology and common browsers Chrome and Firefox. Blisk, proponents said, enables simultaneous development on desktop and mobile, boosts developer productivity, and provides developer-specific features for web development. Improvements under way include capabilities such as page analysis and improved emulation.

Google, Facebook will not place ads on sites distributing fake news

Google, Facebook will not place ads on sites distributing fake news

Some have voiced concern that the moves could increase the power of the internet companies

Google plans to update its AdSense program policies to prevent placement of its ads on sites distributing fake news.

Facebook also said Monday it had updated the policy for its Audience Network, which places ads on websites and mobile apps, to explicitly clarify that it applies to fake news.

“In accordance with the Audience Network Policy, we do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news,” Facebook said in a statement. The company said its team will continue to closely vet all prospective publishers and monitor existing ones to ensure compliance.

False news stories have become a sore point since the U.S. presidential election, with critics blaming internet companies like Twitter and Facebook for letting fake content on their platforms influence the outcome.

The controversy also reflects concerns about the growing power of social networks to influence people and events, as well as help people to communicate and organize. Facebook promotes democracy by letting candidates communicate directly with people, Facebook CEO Mark Zuckerberg said recently in an interview.

Google had its own embarrassing moment on Sunday, when a false story claiming that President-elect Donald Trump had won the popular vote in the U.S. presidential election appeared atop some Google search results. Trump’s Democratic rival Hillary Clinton is leading in the popular vote.

“We’ve been working on an update to our publisher policies and will start prohibiting Google ads from being placed on misrepresentative content, just as we disallow misrepresentation in our ads policies,” Google said Monday in a statement. “Moving forward, we will restrict ad serving on pages that misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property.”

Google evidently expects that the threat of a cut in revenue from ads will dissuade sites from publishing fake content.

Zuckerberg has described as “crazy” the criticism that fake news on Facebook’s news feed had influenced the vote in favor of Trump. “Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes,” Zuckerberg said in a post over the weekend. The hoaxes are not limited to one partisan view, or even to politics, he added.

Identifying the “truth” is complicated, Zuckerberg wrote. While some hoaxes can be clearly identified, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted, or expresses a view that some people will disagree with and flag as incorrect even when it is factual.

There are concerns that the monitoring of sites for fake news and the penalties could give internet companies more power. “We have to be wary of Facebook and Google being allowed to decide what’s ‘fake’ and what’s ‘true’ news. That only increases their power,” said Pranesh Prakash, policy director at the Centre for Internet & Society in Bangalore.

Report: Linux, NoSQL, Nginx set foundation for AWS app dominance

Report: Linux, NoSQL, Nginx set foundation for AWS app dominance

Sumo Logic research shows its customers favor AWS for building their apps, Redis for storing data, and Nginx for serving it all up

If you’ve built a new app in AWS, odds are you’re running it on Linux, with a NoSQL data store and Nginx to serve it to your users.

That’s one of the conclusions drawn by Sumo Logic, a cloud-based analytics service for application data that runs on Amazon Web Services, in its analysis of how its customers are putting together modern, cloud-based apps.

Sumo Logic’s report, entitled “The State of the Modern App in AWS,” uses statistics gathered from the company’s base of 1,200 customers to get an idea of how their apps are created and what they run on.

Amazon (almost) all the way

The report’s first and least surprising finding is that 73 percent of Sumo Logic’s customers have their apps hosted on AWS. The second-biggest slice of the pie isn’t even another cloud—it’s on-premises applications, with 22.4 percent. Everything else, including multicloud deployments, is a distant third at 4.6 percent.

Even if Sumo Logic weren’t hosted on AWS (it is), this finding is in line with other reports that show AWS remains king of cloud environments. The platform commands fierce loyalty among its users—or at least provides strong disincentives to quit it.

Another no-shocker finding: The vast majority of apps run on Linux, even when taking into account variations across environments. Sumo Logic found that on AWS, 82 percent of apps are delivered on Linux; in on-premises environments, it’s around 46 percent, with the rest on Windows. With Azure, the pyramid is inverted: 96 percent are on Windows, 4 percent on Linux.

But Sumo Logic found the AWS user base was 10 times the size of Azure's user base and three times the size of the on-prem users, so Linux still comes out far ahead by any measure.
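
As a rough back-of-envelope check of that claim, the ratios and percentages above can be combined into a single weighted Linux share; the sketch below is purely illustrative and uses only the figures already quoted.

    # Back-of-envelope: weighted Linux share across environments,
    # taking Azure's user base as 1 unit (so AWS = 10, on-prem = 10/3).
    weights = {"aws": 10.0, "on_prem": 10.0 / 3.0, "azure": 1.0}
    linux_share = {"aws": 0.82, "on_prem": 0.46, "azure": 0.04}

    total = sum(weights.values())
    weighted = sum(weights[k] * linux_share[k] for k in weights) / total
    print(round(weighted * 100))  # roughly 68 percent Linux overall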

Redis to store it, Nginx to serve it

For Sumo Logic customers, NoSQL is the way to go for data, and the top database technology is Redis, at 18.22 percent. Redis is likely No. 1 on that list because it covers multiple use cases needed by current-generation applications—in other words, it works as both a database and an in-memory cache, and its new “modules” ecosystem promises even more functionality in the future (such as machine learning).
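
A short sketch of that dual role is below, using the redis-py client; the host, key names, and five-minute TTL are illustrative choices, not anything prescribed by the report.

    # Minimal sketch: one Redis instance acting as both a data store and a cache.
    # Assumes a local Redis server and the redis-py package.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Store-style usage: keep a record with no expiry.
    r.set("user:42:name", "Alice")

    # Cache-style usage: keep a computed value for five minutes only.
    r.set("user:42:recommendations", "item1,item2,item3", ex=300)

    print(r.get("user:42:name"))  # b'Alice'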

Redis and the other two big database choices, MySQL and MongoDB, make up around 50 percent of the total database usage in AWS. The rest is spread between other NoSQL solutions (Cassandra, Dynamo, Memcached, Couchbase) and more conventional RDBMS solutions (PostgreSQL, Amazon Redshift, and Microsoft SQL Server). Of the relational options, only MySQL and PostgreSQL captured more than 10 percent of the user base—a major sign that for all of its shortcomings, MySQL’s broad base of existing support makes it an easy choice.

Web servers are another area where the choices for new apps seem clear. Nginx, a web server built with modern multithreaded workloads in mind and outfitted with scads of third-party add-ons, is used by a little more than 40 percent of Sumo Logic’s user base. Apache httpd—long regarded as the standard—is still in the running with 36.6 percent of users. But Nginx has been steadily displacing Apache httpd on high-volume, high-traffic sites, which is not surprising given that Nginx’s feature set is geared toward modern apps and provides features like native load balancing.

While IIS is in the running, with 21.9 percent, it is by definition confined to Windows Server boxes and thus only likely to grow in tandem with that OS. (Although, with Windows Server becoming friendlier to open source solution stacks in general, that’s not as firmly guaranteed as it might have been in the pre-Nadella era.)

Docker and AWS Lambda: Signs of life

If Linux, AWS, and NoSQL are obvious components of the modern application stack, so are two other major recent technologies: application containerization (Docker) and serverless architecture (AWS Lambda).

About one in five of Sumo Logic’s customers has Docker in production—which Sumo Logic touts as “significant adoption” of “a relatively new technology.” AWS Lambda has a smaller slice of the pie: 12.3 percent of users are employing it in production, with “cloud/devops deployment automation” as one use case cited by Sumo Logic.

There’s little question Docker has dominated much of the discussion around application development, delivery, and deployment of late. Serverless computing—the more general term for what AWS Lambda offers—also has made a dent thanks to its promise to relieve developers of the burden of system maintenance.

What’s tough to discern is the pace of uptake for either technology, since this is the first time Sumo Logic has compiled this information about its users. We’ll have to wait for its next report to get a better idea of how fast either of these is catching on—and how the rest of the new app stack is evolving.

China hints at retaliation against Apple and iPhones if trade war goes hot

China hints at retaliation against Apple and iPhones if trade war goes hot

State-backed newspaper promises 'tit-for-tat' if President-elect Donald Trump follows through with promised tariffs

China yesterday signaled that if President-elect Donald Trump follows through with campaign pledges to slap steep tariffs on goods imported into the United States, retaliation will result in shrunken iPhone sales.

In an op-ed piece published Sunday in Global Times—one of several newspapers controlled by the Communist Party—the editorial writers warned that higher tariffs imposed by the U.S. would trigger reprisals.

“China will take a tit-for-tat approach then,” the piece said of any tariff action by Trump after he takes office in January. “A batch of Boeing orders will be replaced by Airbus. U.S. auto and iPhone sales in China will suffer a setback, and U.S. soybean and maize imports will be halted.”

The op-ed writers did not spell out what steps the Chinese government might take to reduce in-country iPhone sales.

During the election campaign, Trump often took aim at the People’s Republic of China’s trade and currency practices. He regularly told supporters that his administration would levy what he called “defensive” tariffs as high as 45 percent on Chinese imports, and that he would order the Treasury Secretary to proclaim the People’s Republic of China (PRC) a currency manipulator.

The latter remained on Trump’s campaign website on a page dedicated to trade; there was no specific mention of the former, but the seven-point plan included, “Use every lawful presidential power ... including the application of tariffs consistent with Section 201 and 301 of the Trade Act of 1974 and Section 232 of the Trade Expansion Act of 1962.”

Presidents have substantial executive authority over tariffs during times of national emergency, but otherwise the laws Trump cited limit them to temporarily boosting import duties by 15 percent. Higher duties can be ordered by the chief executive only during times of war or declared national emergencies.

The writers at the Party-backed Global Times called Trump’s promises “campaign rhetoric” and bet that he would renege on his tariff pledges. “Trump, as a shrewd businessman, will not be so naïve” as to fuel a trade war, they asserted.

Phones are highly susceptible to import tariffs. According to Caroline Freund, a senior fellow at the Peterson Institute for International Economics, the U.S. imports $40 billion in cell phones from China, or three-fourths of the total value imported.

“For most goods ... imports would just shift to other foreign suppliers if the United States were to greatly restrict trade with China,” Freund wrote in June. “But for 825 products, out of a total of about 5,000, adding up to nearly $300 billion, China supplies more than all our other trade partners combined.”

Phones topped that list, with laptops close behind at $37.1 billion in imports to the U.S. from China. That represented 93 percent of all laptop imports.

China is a very important market to Apple. The $8.8 billion in revenue from the region in the September quarter represented 19 percent of Apple’s total. Yet revenue attributed to China has been down year-over-year for three consecutive quarters: In the third quarter, it was off 30 percent from the same period in 2015.

Last month, CEO Tim Cook blamed several factors for the continued slide in revenue from the region, but said the most important was the brisk sales of the larger iPhone 6 and 6 Plus models in 2014-15. “So when that upgrade rate in fiscal year 2016 returned to a more normal upgrade rate, which would be akin to what we saw with the iPhone 5S as a point, it had further to fall,” Cook contended.

This story, "China hints at retaliation against Apple and iPhones if trade war goes hot" was originally published by Computerworld.

OpenAI will use Microsoft's cloud, as Azure gains more features

OpenAI will use Microsoft's cloud, as Azure gains more features

The partnership shows momentum for AI workloads in Microsoft's cloud

Microsoft’s continued investment in artificial intelligence and machine learning technology is paying dividends. The company has partnered with OpenAI, a non-profit company founded earlier this year to advance the field of machine intelligence for the benefit of humanity. 

As part of the deal, announced Tuesday, OpenAI will use Microsoft Azure as its primary cloud provider, an important win for Microsoft as it competes with the likes of Amazon, Google, and IBM to power the next generation of intelligent applications. OpenAI is backed by the likes of Tesla CEO Elon Musk, controversial investor Peter Thiel, LinkedIn co-founder Reid Hoffman, and Y Combinator Partner Jessica Livingston. 

On top of that, Microsoft also launched a set of cloud services all aimed at furthering intelligent applications. The new Azure Bot Service makes it easier for people to spin up intelligent chat bots in Microsoft’s cloud, while Azure Functions lets customers run compute functions without provisioning servers. The company also announced the general availability of its N-series virtual machines, which give customers the ability to use GPUs for high-performance computing tasks. 

Microsoft has been working to position its cloud as the home for intelligent applications, and these announcements demonstrate further momentum toward that goal.

The N-series virtual machines made generally available Tuesday are an important part of that. They provide users with the ability to run high-performance workloads in the cloud that require the power of GPUs to handle massive parallel computing tasks. It’s a service that OpenAI has already been taking advantage of, along with other customers like Esri and Jellyfish Pictures. 

For companies that want a pre-built service, the Azure Bot Service provides users with a number of templates that they can use to get started building intelligent, conversational assistants. It’s built to easily plug into other Azure services, like Microsoft’s Language Understanding Intelligent Service (LUIS), which helps computer programs parse human language. 

One of the other key services included in Microsoft’s news dump Tuesday is Azure Functions, which lets customers set up a snippet of code that runs whenever a set of conditions is met. Microsoft handles all the provisioning of compute resources necessary to run the Functions, so users don’t have to worry about running a virtual machine all the time to handle irregular events.

Making Azure Functions generally available is important to Microsoft’s ongoing competition with Amazon Web Services. The other Seattle-area cloud provider launched its AWS Lambda service two years ago, and it has proven a popular tool for leveraging the power of what people have begun calling “serverless” computing.
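
For readers unfamiliar with the model both services embody, here is a minimal sketch of an event-triggered function, written as an AWS Lambda-style Python handler; the event fields and return shape are illustrative, not taken from either vendor's documentation.

    # Minimal sketch of the "serverless" model: the platform calls this handler
    # when a trigger fires (an HTTP request, a queue message, a timer) and
    # provisions compute only for the duration of the call.
    import json

    def lambda_handler(event, context):
        # 'event' carries the trigger's payload; 'context' describes the runtime.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Hello, " + name}),
        }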
