pixels and pictures
Exploring the digital imaging chain from sensors to brains

Image Processing Comparison between Smartphones

Philippe J DEWOST's insight:

Remove what suits you.


The mastermind of Google’s Pixel camera, Marc Levoy, quietly left the company in March


Two key Pixel execs, including the computer researcher who led the team that developed the computational photography powering the Pixel’s camera, have left Google in recent months, according to a new report from The Information. The executives who left are distinguished engineer Marc Levoy and former Pixel general manager Mario Queiroz.

 

Queiroz had apparently already moved off the Pixel team two months before the launch of the Pixel 4 into a role that reported directly to Google CEO Sundar Pichai. However, he left in January to join Palo Alto Networks, according to The Information and his LinkedIn. Levoy left Google in March, which is also reflected on his LinkedIn.

Philippe J DEWOST's insight:

Optical Destabilization is underway at Google. I was lucky to meet Marc Levoy 10 years ago while I was running #imsense #eye-fidelity; he is an impressive engineer and will be a great loss.



DeepMind’s AI can ‘imagine’ a world based on a single picture


Artificial intelligence can now put itself in someone else’s shoes. DeepMind has developed a neural network that taught itself to ‘imagine’ a scene from different viewpoints, based on just a single image.

Given a 2D picture of a scene – say, a room with a brick wall, and a brightly coloured sphere and cube on the floor – the neural network can generate a 3D view from a different vantage point, rendering the opposite sides of the objects and altering where shadows fall to maintain the same light source.

The system, called the Generative Query Network (GQN), can tease out details from the static images to guess at spatial relationships, including the camera’s position.

“Imagine you’re looking at Mt. Everest, and you move a metre – the mountain doesn’t change size, which tells you something about its distance from you,” says Ali Eslami, who led the project at DeepMind.

“But if you look at a mug, it would change position. That’s similar to how this works.”

To train the neural network, he and his team showed it images of a scene from different viewpoints, which it used to predict what something would look like from behind or off to the side. The system also taught itself through context about textures, colours, and lighting. This is in contrast to the current technique of supervised learning, in which the details of a scene are manually labeled and fed to the AI.
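To make that training setup concrete, here is a minimal sketch, in PyTorch, of the GQN idea as described above: a representation network encodes each (image, camera pose) observation into a scene vector, the vectors from the context views are summed, and a generator renders the scene from a query viewpoint. This is not DeepMind's actual architecture; the layer sizes, the pose dimensionality and the plain pixel loss are simplifying assumptions.

```python
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encodes one (image, viewpoint) observation into a scene representation."""
    def __init__(self, pose_dim=7, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64 + pose_dim, repr_dim)

    def forward(self, image, pose):
        feat = self.conv(image).flatten(1)           # (B, 64)
        return self.fc(torch.cat([feat, pose], 1))   # (B, repr_dim)

class Generator(nn.Module):
    """Renders a 64x64 view of the scene from a query viewpoint."""
    def __init__(self, pose_dim=7, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + pose_dim, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_pose):
        x = self.fc(torch.cat([scene_repr, query_pose], 1)).view(-1, 64, 8, 8)
        return self.deconv(x)                        # (B, 3, 64, 64)

def training_step(rep_net, gen_net, context_imgs, context_poses, query_pose, target_img):
    # Aggregate the representations of the observed views, then predict the
    # held-out view; the only supervision is the pixel difference with the
    # real image of that view -- no manual scene labels are needed.
    r = sum(rep_net(img, pose) for img, pose in zip(context_imgs, context_poses))
    prediction = gen_net(r, query_pose)
    return nn.functional.mse_loss(prediction, target_img)
```

In this sketch the summed vector r plays the role of the GQN scene representation: feeding in more context views gives the generator more to work with when rendering an unseen viewpoint.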

Philippe J DEWOST's insight:

DeepMind now creates depth in images


Google's first mobile chip is an image processor hidden in the Pixel 2


One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SoC) for consumer products. You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case? Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market. Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.

The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.” It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready. In that way, it’s a rather delightful bonus for new Pixel buyers. The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.
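For readers curious what the HDR+ "magic" being accelerated here actually computes, the following is a minimal sketch of the general burst-merge idea, not Google's pipeline: several underexposed frames are aligned to a reference, averaged to cut noise, then tone-mapped to lift the shadows. The crude global alignment and the gain and gamma values are assumptions for illustration; the real pipeline uses tile-based alignment and a far more careful merge.

```python
import numpy as np

def merge_burst(frames, ref_index=0):
    """Align each frame to the reference by a small global shift, then average."""
    ref = frames[ref_index].astype(np.float32)
    merged = np.zeros_like(ref)
    for frame in frames:
        f = frame.astype(np.float32)
        # crude global alignment: pick the integer shift (dy, dx) in a small
        # window that minimises the mean absolute difference to the reference
        best, best_err = f, np.inf
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                shifted = np.roll(f, (dy, dx), axis=(0, 1))
                err = np.abs(shifted - ref).mean()
                if err < best_err:
                    best, best_err = shifted, err
        merged += best
    return merged / len(frames)          # averaging reduces noise

def tone_map(image, gain=4.0, gamma=2.2):
    """Boost the underexposed merge, then apply a display gamma."""
    out = np.clip(image * gain / 255.0, 0.0, 1.0)
    return (out ** (1.0 / gamma) * 255.0).astype(np.uint8)

# Usage, with a list of same-sized uint8 numpy frames:
# hdr = tone_map(merge_burst(burst_frames))
```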

Philippe J DEWOST's insight:

Google"s Pixel Visual Core and its 8 Image Processing Units unveil a counterintuitive hardware approach to High Dynamic Range processing until you understand the design principles of their HDR approach. #HardwareIsNotDead


This amazing Giroptic IO HD 360 degree camera has only 2 lenses yet delivers immersive photo, video and live streams


Giroptic, creator of the standalone ‘360cam’, today announced the launch of the iO 360 camera which attaches to any Lightning-enabled iPhone or iPad.

 

The Giroptic iO 360 camera for iPhone/iPad is available starting today for $250. With two opposite-facing lenses, the camera enables Apple devices to capture full 360 degree photospheres and videospheres. The camera currently supports 360 degree livestreaming via YouTube, and support for 360 degree Facebook Live is planned.

Specs include two 195 degree lenses with an aperture of F/1.8, onboard stereo microphone and a rechargeable battery. Video is captured at 1920×960 resolution at 30 FPS and is stitched in real time with no post-processing needed. Photos are shot at a higher 3840×1920 resolution.
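As an illustration of what "stitched in real time" involves (this is not Giroptic's algorithm), here is a sketch of remapping two back-to-back fisheye images into one equirectangular panorama. The equidistant lens model, the hard cut between lenses instead of a blended seam, and the assumption of square input frames are simplifications; only the 195 degree field of view comes from the specs above.

```python
import numpy as np

def dual_fisheye_to_equirect(front, back, fov_deg=195.0):
    """front/back: square fisheye images (H, H, 3); returns a (H, 2H, 3) panorama."""
    size = front.shape[0]
    out_h, out_w = size, 2 * size
    pano = np.zeros((out_h, out_w, 3), dtype=front.dtype)
    fov = np.radians(fov_deg)

    # Longitude/latitude of every output pixel of the equirectangular grid
    lon = (np.arange(out_w) / out_w - 0.5) * 2.0 * np.pi      # -pi .. pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi            # pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)

    # Unit viewing ray for each pixel (x forward, y right, z up)
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)

    for image, forward in ((front, 1.0), (back, -1.0)):
        theta = np.arccos(np.clip(forward * x, -1, 1))  # angle from the lens axis
        phi = np.arctan2(z, forward * y)                # angle around the lens axis
        r = theta / (fov / 2.0) * (size / 2)            # equidistant projection radius
        u = (size / 2 + r * np.cos(phi)).astype(int)
        v = (size / 2 + r * np.sin(phi)).astype(int)
        mask = (theta <= fov / 2.0) & (u >= 0) & (u < size) & (v >= 0) & (v < size)
        pano[mask] = image[v[mask], u[mask]]            # a real stitcher would blend the overlap
    return pano
```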

Philippe J DEWOST's insight:

Richard Ollier's team strikes again! Can't wait to test one...


Google to buy Viewdle for $45 million


Visual analysis company Viewdle may be the latest acquisition of Google; the search leader is reportedly paying about $45 million for the Ukraine-based imaging firm.


Motorola, the handset maker acquired this year by Google, was reportedly interested in Viewdle last year.


Project Glass by Google: dream or (augmented) reality?

"We believe technology should work for you — to be there when you need it and get out of your way when you don't."


After five years of offering unlimited free photo backups at “high quality,” Google Photos will start charging for storage once more than 15 gigs on the account have been used


After five years of offering unlimited free photo backups at “high quality,” Google Photos will start charging for storage once more than 15 gigs on the account have been used. The change will happen on June 1st, 2021, and it comes with other Google Drive policy changes like counting Google Workspace documents and spreadsheets against the same cap. Google is also introducing a new policy of deleting data from inactive accounts that haven’t been logged in to for at least two years.

 

All photos and documents uploaded before June 1st will not count against that 15GB cap, so you have plenty of time to decide whether to continue using Google Photos or switch to another cloud storage provider for your photos. Only photos uploaded after June 1st will begin counting against the cap.

 

Google already counts “original quality” photo uploads against a storage cap in Google Photos. However, taking away unlimited backup for “high quality” photos and video (which are automatically compressed for more efficient storage) also takes away one of the service’s biggest selling points. It was the photo service where you just didn’t have to worry about how much storage you had.

Philippe J DEWOST's insight:

4 selfies per human per week #4selfies: today more than 4 trillion photos are stored in Google Photos, and every week 28 billion new photos and videos are uploaded...

 

Philippe J DEWOST's curator insight, December 14, 2020 2:21 AM

Every week, 28 billion photos and videos are added to Google Photos. That is the equivalent of 4 selfies per inhabitant of our planet.

This is what the notification Google sent to all users of its Google Drive hosting service reveals.

It also lists all the options offered to users to free up space, such as detecting blurry photos or photos of presentation slides. The idea, elegant at first glance, relies on quite interesting image processing and analysis capabilities. Why not imagine reconstructing Google Slides documents from such photos, for example?

One last element, symbolic in scope, is the very word "policy", which implicitly says something about how Google & co. view their own power.
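A quick back-of-the-envelope check of the "4 selfies per inhabitant" figure quoted above; the weekly upload count comes from the article, while the world population of roughly 7.8 billion in 2020 is my assumption.

```python
weekly_uploads = 28e9        # photos and videos added to Google Photos per week (from the article)
world_population = 7.8e9     # assumed 2020 world population
print(weekly_uploads / world_population)   # ~3.6, i.e. roughly 4 uploads per person per week
```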


Google Photos adds a chat feature to its app


An argument could be made that Google has over-indulged in its creation of way too many messaging apps in years past. But today’s launch of a new messaging service — this time within the confines of Google Photos — is an integration that actually makes sense.

The company is rolling out a way to directly message photos and chat with another user or users within the Google Photos app. The addition will allow users to quickly and easily share those one-off photos or videos with another person, instead of taking additional steps to build a shared album.

The feature itself is simple to use. After selecting a photo and tapping share, you can now choose a new option, “Send in Google Photos.” You can then tap on the icon of your most frequent contacts or search for a user by name, phone number or email.

The recipient will need a Google account to receive the photos, however, because they’ll need to sign in to view the conversation. That may limit the feature to some extent, as not everyone is a Google user. But with a billion or so Google Photos users out there, it’s likely that most of the people you want to share with will have an account.

You also can use this feature to start a group chat by selecting “New group,” then adding recipients.

Once a chat has been started, you can return to it at any time from the “Sharing” tab in Google Photos. Here, you’ll be able to see the photos and videos you each shared, comments, text chats and likes. You also can save the photos you want to your phone or tap on the “All Photos” option to see just the photos themselves without the conversations surrounding them.

Philippe J DEWOST's insight:

Even if a picture is worth a thousand words, adding a few more may trigger the conversation.


Google may be buying Lytro's assets for about $40M


Multiple sources tell us that Google is acquiring Lytro, the imaging startup that began as a ground-breaking camera company for consumers before pivoting to use its depth-data, light-field technology in VR.

Emails to several investors in Lytro have received either no response, or no comment. Multiple emails to Google and Lytro also have had no response.

But we have heard from several others connected either to the deal or the companies.

One source described the deal as an “asset sale” with Lytro going for no more than $40 million. Another source said the price was even lower: $25 million and that it was shopped around — to Facebook, according to one source; and possibly to Apple, according to another. A separate person told us that not all employees are coming over with the company’s technology: some have already received severance and parted ways with the company, and others have simply left.

Assets would presumably also include Lytro’s 59 patents related to light-field and other digital imaging technology.

The sale would be far from a big win for Lytro and its backers. The startup has raised just over $200 million in funding and was valued at around $360 million after its last round in 2017, according to data from PitchBook. Its long list of investors includes Andreessen Horowitz, Foxconn, GSV, Greylock, NEA, Qualcomm Ventures and many more. Rick Osterloh, SVP of hardware at Google, sits on Lytro’s board.

A pricetag of $40 million is not quite the exit that was envisioned for the company when it first launched its camera concept, and in the words of investor Ben Horowitz, “blew my brains to bits.”

Philippe J DEWOST's insight:

Approximately $680k per patent ($40M spread over 59 patents): this is the end of 12-year-old Lytro's story. After $200M in funding and several pivots, its assets and IP are rumored to be joining Google.

Remember some key steps, from the Light Field Camera (http://sco.lt/8Ga7fN) to DSLR (http://sco.lt/9GGCEz) to 360° video tools (http://sco.lt/5tecr3).


Google RAISR uses machine learning for smarter upsampling

Upsampling techniques to create larger versions of low-resolution images have been around for a long time – at least as long as TV detectives have been asking computers to 'enhance' images. Common linear methods fill in new pixels using simple and fixed combinations of nearby existing pixel values, but fail to increase image detail. The engineers at Google's research lab have now created a new way of upsampling images that achieves noticeably better results than the previously existing methods.

RAISR (Rapid and Accurate Image Super-Resolution) uses machine learning to train an algorithm on pairs of images, one low-resolution, the other with a high pixel count. RAISR creates filters that, when applied to each pixel of a low-resolution image, can recreate image detail comparable to the original. Filters are trained according to edge features found in small areas of images, including edge direction, edge strength and how directional the edge is. The training process with a database of 10,000 image pairs takes approximately an hour.
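The paragraph above can be summarised in a short sketch of the RAISR-style inference step (this is not Google's implementation): cheaply upscale the image, hash each pixel's neighbourhood by edge direction and strength, and apply the filter learned for that hash bucket. The bucket counts, the thresholds and the `filters` dictionary of learned 7x7 kernels are assumptions; in the real system those kernels come from a least-squares fit over the low/high-resolution training pairs, and the hash also includes edge coherence.

```python
import numpy as np
from scipy import ndimage

ANGLE_BUCKETS = 8
PATCH = 7  # filters act on 7x7 neighbourhoods

def bucket_of(patch):
    """Hash a patch by dominant gradient angle and strength (coherence omitted)."""
    gy, gx = np.gradient(patch)
    angle = np.arctan2(gy.sum(), gx.sum()) % np.pi
    strength = np.hypot(gx, gy).mean()
    a = int(angle / np.pi * ANGLE_BUCKETS) % ANGLE_BUCKETS
    s = 0 if strength < 5 else (1 if strength < 20 else 2)   # arbitrary thresholds
    return a, s

def raisr_upscale(low_res, scale, filters):
    """low_res: 2-D grayscale array; filters: dict of (angle, strength) -> 7x7 kernel."""
    up = ndimage.zoom(low_res.astype(np.float32), scale, order=1)  # cheap bilinear upscale
    out = up.copy()
    half = PATCH // 2
    for y in range(half, up.shape[0] - half):
        for x in range(half, up.shape[1] - half):
            patch = up[y - half:y + half + 1, x - half:x + half + 1]
            kernel = filters.get(bucket_of(patch))
            if kernel is not None:
                # the learned filter restores detail that the cheap upscale lost
                out[y, x] = np.clip((patch * kernel).sum(), 0, 255)
    return out.astype(np.uint8)
```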

Philippe J DEWOST's insight:

Google introduces RAISR: when AI meets pixels to reduce bandwidth while improving visual perception.


Ostagram


These user created images are the product of an art technique known as Inceptionism, using neural networks to generate a single mind-bending picture from two source images.

 

The images are possible thanks to DeepDream software, which finds and enhances patterns in images by a process known as algorithmic pareidolia. It was pioneered by Google and was originally code-named Inception after the film of the same name. 
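For the curious, here is a minimal sketch of the DeepDream-style gradient ascent behind this "algorithmic pareidolia"; it is not the original Google code, and not Ostagram's two-image pipeline. An image is nudged so that whatever patterns an intermediate layer of a pretrained network already detects become stronger. The choice of torchvision's GoogLeNet (the Inception architecture) and of the inception4c layer are assumptions, and ImageNet normalisation is skipped for brevity.

```python
import torch
import torchvision

# Pretrained GoogLeNet ("Inception"), used only as a pattern detector
model = torchvision.models.googlenet(weights="DEFAULT").eval()

def deep_dream(image, steps=20, lr=0.05):
    """image: (1, 3, H, W) float tensor in [0, 1]; returns the 'dreamed' image."""
    img = image.clone().requires_grad_(True)
    for _ in range(steps):
        x = img
        for name, layer in model.named_children():
            x = layer(x)
            if name == "inception4c":          # stop at an intermediate block (assumed choice)
                break
        loss = (x ** 2).mean()                 # amplify whatever excites this layer
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)  # normalised gradient ascent
            img.clamp_(0, 1)
            img.grad.zero_()
    return img.detach()
```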

Philippe J DEWOST's insight:

I didn't know anything about Inceptionism and Pareidolia until I bumped into these incredible images on Ostagram...


Rivals Apple, Google and Samsung may ally to acquire Kodak patents


But the bid is lower than the more than $2 billion that Kodak desires...
