I’ve been thinking about getting my first full-frame DSLM camera for some time now, and there are a whole lot of very tempting offerings out there. None of them, however, was able to tick all the boxes that are most important to me – including excellent autofocus, great battery life, no recording limit and a price tag of around 2k. Very recently, Sony announced the Alpha 7c, its smallest full-frame camera so far. While the A7c recycles a lot of established components from earlier Sony cameras and received quite a bit of flak for that (same sensor! no 4K60! no 10-bit!), it does include some minor improvements over the Alpha 7 III that might actually be a major deal for some: a fully articulating screen, eye-tracking autofocus for video and unlimited recording. On the other hand, reviewers found that the in-body image stabilization via sensor shift (IBIS) was curiously worse than that of the A7 III.

While watching some A7c-related videos on YouTube a few days ago, though, I stumbled upon a very interesting video by Gordon Laing:

He reveals that the A7c has a “hidden” feature that relates to video stabilization. I say “hidden” because Sony, for whatever reason, didn’t bother to mention it at all when promoting its latest camera release, focusing entirely on the small form factor. The A7c has a built-in gyroscope sensor that records metadata about the camera’s movements in 3D space while filming – basically, every shake you make leaves a metadata trace in the file. This metadata can then be used by Sony’s free desktop software Catalyst Browse to correct the shakes and stabilize the footage in post. As you can see in Gordon Laing’s video, the results are very impressive, almost gimbal-like! This was also picked up by some other YouTubers like Camera Conspiracies and Lens Library. Sure, it’s another step in post-production (and the software seems to take its time processing footage), but the prospect of not having to pack and balance a gimbal and instead becoming even more mobile is very promising in my opinion.

Now how does this relate to smartphone videography? As you might know, all modern smartphones (unlike most traditional cameras) have gyro sensors in them; the most basic thing they’re good for is controlling the screen’s orientation (portrait or landscape) based on how you’re holding your phone. Why not take advantage of this in a more advanced way and record gyro metadata when capturing video? Google already has a pretty amazing and free software stabilization feature in its Android version of Photos (many still don’t know about it!), but I’m quite sure this is not (yet) based on recorded gyro metadata. While it might not be that easy for a third-party app like Filmic Pro to siphon the gyro metadata off the sensor, it should generally be possible. And what’s more: with the smartphone being not only a camera but also a computer that runs software, the post-stabilization process (it might be too much for the processor to handle in real time while shooting!) could be done on the very same device – unlike when shooting on a DSLM like the A7c. Of course, this would also mean we need some sort of mobile Catalyst Browse app for Android and iOS, but maybe pro mobile video editing apps like LumaFusion or KineMaster could make this happen in the near future? It would require powerful processors, but I think I’m hardly exaggerating when I say that modern flagship phones can be more powerful than a lot of desktop computers. I’m not a software developer, so maybe I’m asking too much (at least right now), but I sure think it’s worth a thought – well, actually more than just one!
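For the curious: the core idea behind gyro-based post-stabilization is actually quite simple. The gyro records how fast the camera is rotating at each moment; integrating that over time tells you how far the camera has turned away from where it started, and the stabilizer then rotates (and crops) each frame by the opposite amount. Here’s a rough, deliberately simplified sketch in Python – single axis only, made-up function names, nothing to do with how Sony or Google actually implement it:

```python
def integrate_gyro(samples):
    """samples: list of (timestamp_s, angular_velocity_rad_per_s) for ONE axis.
    Returns list of (timestamp_s, accumulated_angle_rad) – the camera's drift
    away from its starting orientation over time."""
    angle = 0.0
    out = []
    prev_t = samples[0][0]
    for t, omega in samples:
        angle += omega * (t - prev_t)  # simple rectangular integration
        prev_t = t
        out.append((t, angle))
    return out

def correction_for_frame(angles, frame_t):
    """Find the accumulated angle closest to the frame's timestamp and
    return the inverse rotation the stabilizer would apply to that frame."""
    t, angle = min(angles, key=lambda ta: abs(ta[0] - frame_t))
    return -angle

# Toy example: the camera drifts at a constant 0.1 rad/s for half a second,
# sampled every 10 ms (real gyros sample much faster, on all three axes).
samples = [(i * 0.01, 0.1) for i in range(51)]
angles = integrate_gyro(samples)
print(round(correction_for_frame(angles, 0.5), 3))  # -0.05
```

A real implementation would of course work in 3D (quaternions rather than a single angle), smooth the camera path instead of locking it rigidly, and compensate for rolling shutter – but the principle of “record the rotation, apply the inverse in post” is the same.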

What do you think? Would this be something you’re interested in? How do you like the results? Let me know in the comments or hit me up on Twitter @smartfilming. If you like this blog post, do consider signing up for my Telegram newsletter where you will be notified about new blog posts.