Intel Shuts Down New Devices Group: No More Intel-Made Wearables
by Anton Shilov on April 20, 2018 11:00 AM EST - Posted in
- Wearables
- Peripherals
- Intel
- Atom
- Basis
Intel this week confirmed that it had decided to close down its New Devices Group, which developed various wearable electronics, such as smartwatches, health/fitness monitors, and smart/AR glasses. The group was created five years ago by then-incoming CEO Brian Krzanich, who wanted to ensure that Intel’s chips would be inside millions of emerging devices. While wearables have become relatively popular, their adoption remains far below that of smartphones, and wearables made by Intel have never been among the market's bestsellers. Thus, the chip giant is pulling the plug.
Over the five-year history of NDG, Intel made two significant acquisitions to bring the necessary expertise to the group: the company took over Basis (a maker of fitness watches) in 2014 and Recon (a maker of wearable heads-up displays) in 2015. Most recently, NDG showcased its Vaunt (aka Superlight) smart glasses, which looked like “normal” glasses yet used laser beams to project information directly onto the wearer’s retina, justifying their “smart” moniker. While NDG had cutting-edge technologies, the group never managed to produce a truly popular product. Moreover, when problems showed up on a limited number of its Basis smartwatches, Intel preferred to halt sales and refund customers rather than fix the problems and replace the faulty units.
In the second half of 2015, Intel folded the New Devices Group into the New Technology Group, a signal that the company was hardly satisfied with NDG’s performance. Since then, we have seen multiple reports of layoffs in Intel’s NDG and heard multiple rumors that the unit would be axed. Because making actual devices is generally not a natural business for Intel, it was only a matter of time before the chip giant pulled the plug, and it apparently decided to do so this month.
Since Intel’s New Technology Group remains in place, all of Intel’s ongoing research projects for smart devices remain intact. More importantly, Intel’s other divisions continue to work on their products for wearables and ultra-low-power devices that will become widespread in the looming 5G era. The only products that are not going to see the light of day are those designed by Intel’s New Devices Group (e.g., the Vaunt glasses). Considering that none of NDG’s products ever became popular, it is unclear whether they will be missed.
It is noteworthy that Intel canned its Galileo, Joule, and Edison product lines aimed at the Internet of Things last summer.
Source: CNBC
Comments
patrickjp93 - Sunday, April 22, 2018
Yeah, those GB scores are useless for comparing between ISAs. x86 is a second-class citizen to that organisation. It doesn't even use AVX code, yet uses the equivalent ARM instructions where possible.
Wilco1 - Sunday, April 22, 2018
That's completely false. GB is not only developed on x86, it supported vectorized versions for x86 first, and there are multiple vectorized implementations for SSE and AVX.
Hifihedgehog - Sunday, April 22, 2018
Geekbench does not test sustained performance well at all. 3DMark's physics test and TabletMark are much better indicators of sustained performance.
FunBunny2 - Friday, April 20, 2018
baloney. the issue is simple: Intel built a 1960s-era chip in the 1970s; the 8086 was just an incremental build from the 8080. why that matters isn't obvious. here's why CISC even existed: until Burroughs and its ALGOL-driven machines, computers were programmed in assembler. for such machines, CISC is necessary. writing user-level applications in a RISC assembler was always a non-starter. with the rise of C (in particular) and other 2nd gen languages, only compiler writers care about the ISA. we don't care how horrible life is for compiler writers. once that became true, RISC (at the ISA level) was a no-brainer. enter Acorn, the progenitor of ARM.

the fight between CISC and RISC was lost by Intel, et al, decades ago. if CISC really were superior, Intel, et al, would have used those billions and billions of transistors that have been available for decades to implement the x86 ISA in silicon. they blatantly didn't do that. they blatantly built RISC hardware behind a CISC "decoder". yes, Itanium died, but it lives on Inside Intel.
HStewart - Friday, April 20, 2018
The funny thing about this is that if this was AMD stating this, it would be OK. The fight between CISC and RISC has not been lost - it's just different. ARM has its purpose and so does x86 code.
I think we should just agree to disagree - we come from two different perspectives. Go on believing that an ARM processor can compete with a high-end Xeon processor - I would like to see it do stuff like real 3D graphics creation in Lightwave 3D, 3ds Max, AutoCAD and SolidWorks.
If it can, you are correct; otherwise I will laugh and you should be called "Funny Bunny".
Wilco1 - Saturday, April 21, 2018
Arm certainly beats Xeon on image processing, both single- and multithreaded, and is more than 6 times more power efficient: https://blog.cloudflare.com/neon-is-the-new-black/

With results like these it's safe to say x86 will be losing a lot of the server market to Arm, just like they lost mobile.
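(Editorial illustration: below is a minimal sketch of the kind of NEON vectorization the linked post is about, applied to a toy operation — a saturating add of a constant to every pixel byte. It is not the Cloudflare/jpegtran code; the function and variable names are invented, and it assumes an AArch64 toolchain with arm_neon.h available.)

```c
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* Toy example: brighten an 8-bit grayscale buffer by 'delta' with saturation.
 * The NEON loop handles 16 pixels per iteration; a scalar tail does the rest. */
void brighten_u8_neon(uint8_t *pixels, size_t n, uint8_t delta)
{
    uint8x16_t vdelta = vdupq_n_u8(delta);        /* broadcast delta to 16 lanes */
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        uint8x16_t px = vld1q_u8(pixels + i);     /* load 16 pixels */
        px = vqaddq_u8(px, vdelta);               /* saturating add, clamps at 255 */
        vst1q_u8(pixels + i, px);                 /* store 16 pixels */
    }
    for (; i < n; i++) {                          /* scalar tail */
        unsigned v = pixels[i] + delta;
        pixels[i] = (uint8_t)(v > 255u ? 255u : v);
    }
}
```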
patrickjp93 - Sunday, April 22, 2018
Yeah, that bench is bogus. It uses serial x86 code, not vectorised. Bring in AVX/AVX2 and it flips about 40% in Intel's favor. See GIMP benchmarks.
Wilco1 - Sunday, April 22, 2018
No, the x86 version is vectorized in the same way, as is clearly shown. It was even explained why using AVX2 actually slows things down.
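(Editorial illustration: for comparison with the NEON sketch above, here is the same invented toy routine written with 256-bit AVX2 intrinsics; it is not code from either benchmark under discussion. One hedged note on the "AVX2 slows things down" point: heavy use of wide AVX2/AVX-512 instructions can reduce the operating frequency on some Xeon parts, which is one way a wider-vector build can end up slower overall.)

```c
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Same toy operation as the NEON sketch: saturating add of 'delta' to every byte,
 * 32 pixels per iteration. Requires an AVX2-capable CPU and -mavx2 (or equivalent). */
void brighten_u8_avx2(uint8_t *pixels, size_t n, uint8_t delta)
{
    __m256i vdelta = _mm256_set1_epi8((char)delta);           /* broadcast delta */
    size_t i = 0;
    for (; i + 32 <= n; i += 32) {
        __m256i px = _mm256_loadu_si256((const __m256i *)(pixels + i));
        px = _mm256_adds_epu8(px, vdelta);                    /* unsigned saturating add */
        _mm256_storeu_si256((__m256i *)(pixels + i), px);
    }
    for (; i < n; i++) {                                      /* scalar tail */
        unsigned v = pixels[i] + delta;
        pixels[i] = (uint8_t)(v > 255u ? 255u : v);
    }
}
```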
mode_13h - Saturday, April 21, 2018
So C enabled RISC? I never heard that one. I think you're off by at least a decade. I think HDL and VLSI enabled pipelined and superscalar CPUs, which were easier to optimize for RISC and could offset its primary disadvantage of more verbose machine code.
Also, IA64 DOES NOT live inside of modern Intel CPUs. That's almost troll-worthy nonsense.
FunBunny2 - Saturday, April 21, 2018
"So C enabled RISC?"indirectly. C made it practical to have a RISC ISA with a 2 GL coder space. C proved that a 2 GL could reach down to assembler semantics in a processor independent way. if you work at the pure syntax level. at the same time, the notion of a standard library meant that one could construct higher level semantics grafted on; again in a processor independent way. there's a reason C was/is referred to as the universal assembler. virtually every cpu in existence
the result was/is that CISC in the hardware wasn't necessary. the libraries took care of that stuff (C as in complex) when necessary. compiler writers, on the other hand, benefited from RISC semantics at the assembler level, since there are much greater similarities among various RISC ISAs than among CISC ones. by now, of course, we're down to x86, Z, and ARM. and, I reiterate, if CISC were inherently superior, Intel (and everybody else) would have used those billions and billions of transistors to implement their ISAs in hardware. they didn't. they built RISC in the hardware. calling it "micro-code" is obfuscation. it's a RISC machine. the last x86 chip that wasn't RISC behind a CISC "decoder" was Pentium era.
if one looks at cpu die shots over the years, the core continues to be a shrinking percent of the real estate.
"Also, IA64 DOES NOT live inside of modern Intel CPUs."
do you really think that Intel could have built that RISC/X86 machine if they hadn't gone through the Itanium learning curve?
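(Editorial aside on the "universal assembler" argument above: the same C source compiles to whatever ISA the target machine uses, which is why only compiler writers have to care what the instruction set looks like. A minimal sketch; the assembly in the comments is roughly what an optimizing compiler such as gcc -O2 might emit, shown purely for illustration and not taken from any real build.)

```c
/* One portable C function; two very different instruction sets underneath. */
int dot3(const int *a, const int *b)
{
    /* 3-element integer dot product */
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

/*
 * Roughly what an optimizing compiler might produce (illustrative, not exact):
 *
 * x86-64 ("CISC"-style encoding; arithmetic may take memory operands):
 *     mov   eax, [rdi]        ; load a[0]
 *     imul  eax, [rsi]        ; a[0] * b[0]
 *     mov   ecx, [rdi+4]
 *     imul  ecx, [rsi+4]      ; a[1] * b[1]
 *     add   eax, ecx
 *     mov   ecx, [rdi+8]
 *     imul  ecx, [rsi+8]      ; a[2] * b[2]
 *     add   eax, ecx
 *     ret
 *
 * AArch64 (RISC-style load/store; arithmetic only on registers):
 *     ldr   w2, [x0]          // load a[0]
 *     ldr   w3, [x1]          // load b[0]
 *     mul   w2, w2, w3
 *     ldr   w4, [x0, #4]
 *     ldr   w5, [x1, #4]
 *     madd  w2, w4, w5, w2    // w2 += a[1]*b[1]
 *     ldr   w4, [x0, #8]
 *     ldr   w5, [x1, #8]
 *     madd  w0, w4, w5, w2    // result in w0
 *     ret
 */
```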