Amazon sale knocks up to $270 off Roborock robot vacuums

If you want more than just a robot vacuum, Roborock’s models offer not only strong suction but mopping functions as well. Now you can grab some of the company’s best models at steep discounts thanks to Amazon’s latest sale. Some of the be…

Get the latest B&W Px7 S2 noise-canceling over-ear headphones for US$329

Bowers & Wilkins (B&W) released the Px7 S2 in the middle of the year to strong reviews for its refined design and build materials, with sound quality and active noise cancellation both improved over the previous generation. It has finally received its first discount on Amazon US; just follow our forwarding guide to pick one up with ease. The same brand’s PI5 is still on sale as well, for readers who are interested.…

OpenAI releases Point-E, which is like DALL-E but for 3D modeling

OpenAI, the artificial intelligence startup co-founded by Elon Musk and best known for its popular DALL-E text-to-image generator, announced on Tuesday the release of its newest picture-making machine, Point-E, which can produce 3D point clouds directly from text prompts. Whereas existing systems like Google’s DreamFusion typically require multiple hours and multiple GPUs to generate their images, Point-E needs only a single GPU and a minute or two.

A corgi in a Santa hat
OpenAI

3D modeling is used across a variety of industries and applications. The CGI effects of modern movie blockbusters, video games, VR and AR, NASA’s moon crater mapping missions, Google’s heritage site preservation projects, and Meta’s vision for the Metaverse all hinge on 3D modeling capabilities. However, creating photorealistic 3D images is still a resource- and time-consuming process, despite NVIDIA’s work to automate object generation and Epic Games’ RealityScan mobile app, which lets anyone with an iOS phone scan real-world objects as 3D images.

Text-to-image systems like OpenAI’s DALL-E 2, Craiyon, DeepAI, Prisma Labs’ Lensa and Stability AI’s Stable Diffusion have rapidly gained popularity, and no small amount of notoriety, in recent years. Text-to-3D is an offshoot of that research. Point-E, unlike similar systems, “leverages a large corpus of (text, image) pairs, allowing it to follow diverse and complex prompts, while our image-to-3D model is trained on a smaller dataset of (image, 3D) pairs,” the OpenAI research team led by Alex Nichol wrote in “Point·E: A System for Generating 3D Point Clouds from Complex Prompts,” published last week. “To produce a 3D object from a text prompt, we first sample an image using the text-to-image model, and then sample a 3D object conditioned on the sampled image. Both of these steps can be performed in a number of seconds, and do not require expensive optimization procedures.”

Point-E
OpenAI

If you were to input a text prompt, say, “a cat eating a burrito,” Point-E would first generate a synthetic-view 3D rendering of said burrito-eating cat. It would then run that generated image through a series of diffusion models to create a 3D, RGB point cloud of the initial image, first producing a coarse 1,024-point cloud, then a finer 4,096-point one. “In practice, we assume that the image contains the relevant information from the text, and do not explicitly condition the point clouds on the text,” the research team points out.

These diffusion models were each trained on “millions” of 3D models, all converted into a standardized format. “While our method performs worse on this evaluation than state-of-the-art techniques,” the team concedes, “it produces samples in a small fraction of the time.” If you’d like to try it out for yourself, OpenAI has posted the project’s open-source code on GitHub.
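For anyone curious how this looks in code, here’s a minimal sketch based on the sample notebook in that repository. One caveat: the full pipeline described above runs text-to-image first, then image-to-point-cloud, while the repo’s simplest example uses a small text-conditioned base model (“base40M-textvec”) that skips the intermediate image. The checkpoint names and the PointCloudSampler interface are taken from the repo’s example code, so verify them against the current README before relying on this.

import torch
from tqdm.auto import tqdm

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Base model: emits a coarse 1,024-point cloud conditioned on the prompt.
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])
base_model.load_state_dict(load_checkpoint(base_name, device))

# Upsampler: refines the coarse cloud up to 4,096 points.
upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])
upsampler_model.load_state_dict(load_checkpoint('upsample', device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],         # coarse pass, then added detail
    aux_channels=['R', 'G', 'B'],           # per-point color
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # only the base model sees the prompt
)

# Sample a point cloud for the article's example prompt.
samples = None
for x in tqdm(sampler.sample_batch_progressive(
        batch_size=1, model_kwargs=dict(texts=['a cat eating a burrito']))):
    samples = x

pc = sampler.output_to_point_clouds(samples)[0]
print(pc.coords.shape)  # expect (4096, 3)

Note how the upsampler is never shown the text prompt, which mirrors the paper’s point that the point-cloud stages don’t explicitly condition on the text.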

An algorithm can use WiFi signal changes to help identify breathing issues

National Institute of Standards and Technology (NIST) researchers have developed a way to monitor breathing based on tiny changes in WiFi signals. They say their BreatheSmart deep-learning algorithm could help detect if someone in the household is having breathing issues.

WiFi signals are almost ubiquitous, bouncing off of and passing through surfaces as they link devices with routers. Any movement alters a signal’s path, including the subtle motion of the body as we breathe, and that motion changes when we’re having breathing issues. Your chest will move differently if you’re coughing, for instance.

Other researchers have explored the use of WiFi signals to detect people and movements, but their approaches required dedicated sensing devices and their studies provided limited data. A few years ago, a company called Origin Wireless developed an algorithm that works with a WiFi mesh network. Similarly, NIST says BreatheSmart works with routers and devices that are already available on the market. It only requires a single router and connected device.

The scientists modified the firmware on a router so that it would check “channel state information,” or CSI, more frequently. CSI refers to the signals sent from a device, such as a phone or laptop, to the router. CSI signals are consistent, and the router knows what they should look like, but changes in the environment, such as the signal bouncing off a surface or being disturbed by movement, modify them. The researchers had the router request CSI signals up to 10 times per second to build a finer-grained picture of how the signal was being modified.
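As a rough illustration of what that faster CSI sampling buys you, consider a toy version of the signal processing involved. This is a hypothetical sketch in Python, not NIST’s code: it assumes chest motion shows up as a small periodic modulation in a single stream of CSI amplitudes, and that normal respiration sits roughly between 0.1 and 0.5 Hz (6 to 30 breaths per minute).

import numpy as np

def estimate_breathing_rate(csi_amplitude, sample_rate_hz=10.0):
    """Estimate breaths per minute from a 1-D series of CSI amplitudes.

    Hypothetical illustration: assumes chest motion modulates the CSI
    amplitude and that respiration falls in the 0.1-0.5 Hz band.
    """
    # Remove the static (DC) component so only motion-driven change remains.
    signal = csi_amplitude - np.mean(csi_amplitude)

    # Examine the frequency content of the amplitude series.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)

    # Keep only plausible respiration frequencies (6 to 30 breaths/min).
    band = (freqs >= 0.1) & (freqs <= 0.5)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0  # breaths per minute

# Demo with synthetic data: 60 s of CSI sampled at 10 Hz, carrying a
# 0.25 Hz (15 breaths/min) "breathing" modulation plus noise.
t = np.arange(0, 60, 0.1)
fake_csi = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t) + 0.01 * np.random.randn(len(t))
print(estimate_breathing_rate(fake_csi))  # ~15

At 10 samples per second, a 0.25 Hz breathing signal is captured comfortably above the Nyquist limit, which is the point of modifying the firmware in the first place.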

The team simulated several breathing conditions with a manikin and monitored changes in CSI signals with an off-the-shelf router and receiving device. To make sense of the data they collected, NIST research associate Susanna Mosleh developed the algorithm. In a paper, the researchers noted that BreatheSmart correctly identified the simulated breathing conditions 99.54 percent of the time.
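BreatheSmart itself is a deep-learning model and NIST hasn’t published drop-in code, so here’s a deliberately simplified stand-in showing the shape of the classification step: fixed-length windows of CSI-derived features in, simulated breathing-condition labels out. The window size, features, labels and random-forest model are all illustrative assumptions, not the actual BreatheSmart architecture.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical setup: each row is one window of CSI-derived features
# (e.g., per-subcarrier amplitude statistics); each label is a simulated
# breathing condition (0 = normal, 1 = coughing, 2 = labored).
rng = np.random.default_rng(0)
n_windows, n_features = 600, 64
y = rng.integers(0, 3, size=n_windows)
# Stand-in features with a weak class-dependent shift so the demo learns.
X = rng.normal(size=(n_windows, n_features)) + 0.3 * y[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print('accuracy:', accuracy_score(y_test, clf.predict(X_test)))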

Mosleh and Jason Coder, who heads up NIST’s research in shared spectrum metrology, hope developers will be able to use their research to create software that can remotely monitor a person’s breathing with existing hardware. “All the ways we’re gathering the data is done on software on the access point (in this case, the router), which could be done by an app on a phone,” Coder said. “This work tries to lay out how somebody can develop and test their own algorithm. This is a framework to help them get relevant information.”

Hands-on with LG’s 240Hz UltraGear gaming monitors: Setting a new bar for OLED refresh rates

Earlier this year, Alienware released what’s arguably the best all-around gaming monitor on the market right now: the AW3423DW. But last week, LG quietly announced its latest batch of UltraGear gaming monitors and after getting a chance to check them o…

The first weird gadget of CES 2023 is Lenovo’s Swiss Army lamp

CES usually features some pretty eccentric gadgets, and Lenovo is kicking off that trend for 2023 with the Go Desk Station with Webcam. It’s designed for those of us with limited desk space, serving as a webcam, adjustable desk light, Qi wireless charger and expansion hub all in one. It doesn’t compromise on any of those functions, but it’s priced accordingly.

The primary feature is the Lenovo Go 4K Pro Webcam (also available as a standalone camera), designed for video conferencing and high-res streaming. It can stream 4K at up to 30 fps and includes autofocus and auto-framing with an adjustable field of view, along with automatic ambient light adjustment via the built-in desk light.

Lenovo's desk light has an integrated webcam, wireless charger and 135W power input
Lenovo

That desk light rides on a height-adjustable, rotating arm and can be positioned in almost any direction to illuminate your face or objects on your desk. You can choose from three color temperatures to match your environment: 3,000K (yellow white), 4,500K (cool white) and 6,500K (daylight), with brightness up to 1,600 lux at 0.5 meters (about 1.6 feet).

It’s a versatile hub, as well. To start, there’s a 135-watt USB-C power input and a full-function 65-watt USB-C port for laptop power. It also includes a 15-watt Qi-compliant charging pad for mobile devices, a 20-watt USB-C port, two USB-A 3.1 ports and an HDMI 2.0 output that drives external displays at up to 4K 60fps.

If you’re already shopping for a desk lamp, wireless charger and USB hub, this could fit the bill in a single purchase. You’ll pay for it, though: the Go Desk Station with Webcam arrives in March 2023 starting at $329, or you can grab the Lenovo Go 4K Pro webcam by itself for $150, also due in March.

Lenovo’s IdeaPad Flex 3i Chromebook offers a larger display and optional 1080p webcam

Lenovo has launched the IdeaPad Flex 3i 2-in-1 Chromebook with improved features over last year’s Flex 3i Chromebook, along with a higher price tag. The 16:10 12.2-inch display is an inch larger than before, and it can be used as a laptop, tablet or ma…

Lenovo updates its IdeaPad Pro and Slim laptops with the latest Intel and AMD chips

We’re not that far away from CES, where we should expect new chip announcements from Intel and AMD. That’s normally followed by a raft of Windows 11 laptop announcements that use the new silicon, but Lenovo has decided to get its news out of the door w…

TikTok will explain why it recommends videos on its ‘For You’ page

The algorithm that powers TikTok’s “For You” page has long been a source of fascination and suspicion. Fans often remark on the app’s eerie accuracy, while TikTok critics have at times speculated the company could subtly manipulate its algorithm to influence its users in more nefarious ways.

Now, the company is taking new steps to demystify some aspects of its algorithm. The app is introducing a feature that will “help people understand why a particular video has been recommended to them.” With the update, users will be able to tap on a new question mark icon, which will list some factors that played a role in the recommendation.

In a blog post, the company notes that its “recommendation system is powered by technical models” and that the feature is meant to make “technical details more easily understandable.” For now, that also means the details shared sound a bit vague. For example, “this video is popular in the United States” and “you are following Hanna” are two of the explanations provided by TikTok. Other explanations may be based on “user interactions, such as content you watch, like or share, comments you post, or searches.”

The company says it plans to add “more granularity and transparency” to the feature over time, though, so the explanations could eventually get more detailed. A TikTok spokesperson said that future versions may also incorporate other factors that influence the app’s algorithm, like an individual’s account settings.

While the feature will likely not do much to assuage critics who think TikTok, or parent company ByteDance, uses the algorithm to manipulate users, it could help make its recommendations a bit more understandable to its users. And the change is part of a broader move from TikTok to prove it’s willing to be more transparent about the inner workings of its app. The company has also partnered with Oracle to conduct a review of its algorithms and content moderation system.