There are times when – for reasons of privacy or even a person’s physical safety – you want to make certain parts of a frame in a video unrecognizable so as not to give away someone’s identity or the place where you shot the video. While it’s fairly easy to achieve this in a photograph, it’s a lot more challenging in video, for two reasons: 1) A person moving around within the shot, or a moving camera, constantly alters the subject’s location within the frame. 2) If the person talks, they might also be identifiable just by their voice. So are there any apps that help you anonymize persons or objects in videos when working on a smartphone?
KineMaster – the best so far
Up until recently, the best app for anonymizing persons – and certain parts of a video in general – was KineMaster, which I already praised in my last blog post about the best video editing apps on Android (it’s also available for iPhone/iPad). While you can use any video editor that allows for a resizable image layer (say, a plain black square or rectangle) on top of the main track to cover a face, KineMaster is the only one with a dedicated blur/mosaic tool for this use case. Many other video editing apps have a blur effect in their repertoire, but that effect always affects the whole image and can’t be applied to only a part of the frame. KineMaster, on the other hand, lets you adjust the size and position of its Gaussian Blur effect within the frame. To access this feature, scroll to the part of the timeline where you want to apply the effect but don’t select any of the clips! Now tap on the “Layer” button, choose “Effect”, then “Basic Effects”, then either “Gaussian Blur” or “Mosaic”. An effect layer gets added to the timeline which you can resize and position within the preview window.

Even better: KineMaster also lets you keyframe this layer, which is incredibly important if the subject or object you want to anonymize is moving around the frame, or if the camera is moving (thereby constantly altering the subject’s position within the frame). Keyframing means you can set “waypoints” for the effect’s area so that its position and size change automatically over time. You can access the keyframing feature by tapping on the key icon in the left sidebar. Keyframes have to be set manually, so it’s a bit of work, particularly if your subject is moving a lot. If you just have a static shot with the person not moving around much, you don’t have to bother with keyframing though.
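If you’re curious what keyframing does under the hood, the core idea is simple interpolation: the app stores (time, region) waypoints and computes the in-between blur region for every frame. Here’s a minimal, self-contained sketch of that idea – an illustration only, not KineMaster’s actual implementation; the function name and data layout are my own.

```python
def interpolate_region(keyframes, t):
    """Linearly interpolate a blur region (x, y, w, h) at time t.

    keyframes: list of (time, (x, y, w, h)) tuples, sorted by time.
    Before the first / after the last waypoint, the region is held static.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, r0), (t1, r1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)  # 0..1 progress between the two waypoints
            return tuple(a + (b - a) * f for a, b in zip(r0, r1))

# A subject walking left to right: two waypoints, the blur region follows.
kf = [(0.0, (100, 50, 80, 80)), (2.0, (300, 50, 80, 80))]
print(interpolate_region(kf, 1.0))  # halfway: (200.0, 50.0, 80.0, 80.0)
```

This is exactly why a fast-moving subject needs many waypoints: linear interpolation only matches the real motion if the path between two keyframes is roughly a straight line.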
And as if the adjustable blur/mosaic effect and support for keyframing weren’t good enough, KineMaster also gives you a tool to add an extra layer of privacy: you can alter voices. To access this feature, select a clip in the timeline, then scroll down the menu on the right to find “Voice Changer” – there’s a whole bunch of different effects. To be honest, most of them are rather cartoonish – I’m not sure you want your interviewee to sound like a chipmunk. But there are also a couple of voice changer effects that I think can be used in a professional context.
What happened to Censr?
As I indicated above, a moving subject (or a moving camera) makes anonymizing content within a video a lot harder. You can manually keyframe the blurred area to follow along in KineMaster, but it would be much easier if that could be done via automatic tracking. Last summer, a closed beta version of an app called “Censr” was released on iOS that could automatically track and blur faces. It all looked quite promising (I saw some examples on Twitter), but the developer Sam Loeschen told me that “unfortunately, development on censr has for the most part stopped”.
PutMask – a new app with a killer feature!
But you know what? There actually is a smartphone app out there that can automatically track and pixelate faces in a video: it’s called PutMask and it’s currently only available for Android (an iOS version is planned). The app (released in July 2020) offers three ways of pixelating faces in videos: automatically via face-tracking, manually by following the subject with your finger on the touchscreen, and manually by keyframing. The keyframing option is the most cumbersome but might be necessary when the other two don’t work well. The “swipe follow” option is the middle ground: not as time-consuming as keyframing, but manual action is still required. The most convenient approach is of course automatic face-tracking (you can even track multiple faces at the same time!) – and I have to say that in my tests, it worked surprisingly well!
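By the way, the “pixelate/mosaic” part of this is conceptually simple: the selected region is divided into blocks and every pixel in a block is replaced by the block’s average, destroying fine detail. Here’s a miniature sketch on a tiny grayscale “image” (a list of rows) – an illustration of the filter only; PutMask’s real pipeline adds face detection and tracking on top, and this is not its actual code.

```python
def pixelate(img, x, y, w, h, block=2):
    """Mosaic the region (x, y, w, h) of a grayscale image (list of rows)."""
    out = [row[:] for row in img]  # copy so the source frame stays untouched
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            ys = range(by, min(by + block, y + h))
            xs = range(bx, min(bx + block, x + w))
            vals = [img[j][i] for j in ys for i in xs]
            avg = sum(vals) // len(vals)  # average of the block
            for j in ys:
                for i in xs:
                    out[j][i] = avg  # flatten the whole block to the average
    return out

# A 4x4 "image" with distinct values: after pixelation each 2x2 block is flat.
img = [[0, 10, 20, 30],
       [40, 50, 60, 70],
       [80, 90, 100, 110],
       [120, 130, 140, 150]]
print(pixelate(img, 0, 0, 4, 4))
```

The larger the block size relative to the face, the less recoverable the identity – which is why a mosaic that tracks the face tightly can use coarse blocks without obscuring the rest of the frame.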
Does it always work? No, there are definitely situations in which the feature struggles. If you are walking around and your face gets covered by something else (for instance because you are passing another person or an object like a tree), even for only a short moment, the tracking often loses you. It even lost me when I was walking around indoors and the lens flare from the light bulb on the ceiling created a visual “barrier” which I passed at some point.

And although I would say that the app is generally well-designed, some of the workflow steps and the nomenclature can be a bit confusing. Here’s an example: after choosing a video from your gallery, you can tap on “Detect Faces” to start a scanning process. The app will tell you how many faces it has found and display a numbered square around each face. If you now tap on “Start Tracking”, the app tells you “At least select One filter” – but I couldn’t find any button indicating a “filter”. After some confusion I discovered that you need to tap once on the square that is placed over the face in the image; maybe by “filter” they actually mean you need to select at least one face? Now you can initiate the tracking. After the process is finished you can preview the tracking the app has done (and dig deeper into the options to alter the amount of pixelation etc.), but to check the actual pixelated video you have to export your project first.

While the navigation could and should be made clearer and more intuitive for certain actions, I was quite happy with the results in general. The biggest catch currently is that the maximum export resolution is 720p. I wouldn’t say it’s a deal breaker per se, as 720p is still an acceptable resolution for certain use cases, particularly if you publish to the web/social media. But 1080p would definitely be better. I talked to the developer, who said they will try to add 1080p export in one of the next updates.
I suppose the face-tracking feature is computationally demanding, and sifting through 1080p footage might (still) be somewhat challenging for less powerful Android devices. But I’m sure current flagship processors like the Snapdragon 855/865 or Samsung’s recent Exynos SoCs should be able to handle tracking in FHD. An additional feature that would be great to have in an app with a dedicated focus on privacy and anonymization is the ability to alter/distort a person’s voice, like you can in KineMaster.
There’s one last thing I should address: the app is free to download with all its core functionality, but you only get SD resolution and a watermark on export. For 720p watermark-free export, you need to make an in-app purchase. The IAP procedure is without a doubt the weirdest I have ever encountered: the app tells you to purchase any one of a selection of different “characters” to receive the additional benefits. Initially, these “characters” are just names in boxes: “Simple Man”, “Happy Man”, “Metal-Head” etc. If you tap on a box, an animated character pops up. But only when scrolling down does it become clear that these “characters” represent different amounts of payment with which you support the developer. And if that wasn’t strange enough by itself, the amount you can donate goes up to a staggering 349.99 USD (the character “Dr. Plague”) – no kidding! At first, I had actually selected Dr. Plague because I thought it was the coolest-looking character of the bunch. Only when trying to go through with the IAP did I become aware that I was about to drop 350 bucks on the app! Seriously, this is nuts! I told the developer that I don’t think this is a good idea. Anyway, the amount of money you donate doesn’t affect your additional benefits, so you can just opt for the first character, the “Simple Man”, which costs 4.69€. I’m not sure why they would want to make things so confusing for users willing to pay, but other than that, PutMask is a great new app with a lot of potential – I will definitely keep an eye on it!
As always, if you have questions or comments, drop them below or hit me up on Twitter @smartfilming. If you like this article, also consider subscribing to my Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter about important things that happened in the world of mobile video.
Ever since I started this blog, I have wanted to write an article about my favorite video editing apps on Android, but I could never decide how to go about it: whether to write a separate in-depth article on each of them, one really long piece on all of them, or a more condensed overview without too much detail or workflow explanation. So I recently figured there had been enough pondering and I should just start writing. The basic common ground for all the mobile video editing apps mentioned here is that they let you combine multiple video clips in a timeline and arrange them in the desired order. Some might question the validity of editing video on a screen as (relatively) small as a smartphone’s (even though screen sizes have increased drastically in recent years). While there definitely are limitations and I probably wouldn’t consider editing a feature-length movie that way, there’s also an undeniable fascination in the fact that it’s actually doable and can be a lot of fun. I would even dare to say that it’s a charming throwback to the days before digital non-linear editing, when cutting and splicing actual film strips had a very tactile nature to it. But let’s get started…
KineMaster
When I got my first smartphone in 2013 and started looking for video editing apps in the Google Play Store, I ran into a lot of frustration. There was a plethora of video editing apps, but almost none of them could do more than manipulate a single clip. Then, in late December, an app called KineMaster was released, and just by looking at the screenshots of the UI I could tell that this was the game changer I had been waiting for: a mobile video editing app that actually aspired to give you the proper feature set of a (basic) desktop video editor. Unlike some other (failed) attempts in that respect, the devs behind KineMaster realized that more advanced editing tools could become an unpleasant boomerang flying in their face if the controls weren’t touch-friendly on a small screen. If you ever had the questionable pleasure of using a video editing app called “Clesh” on Android (it’s long gone), you know what I’m talking about. To this day, I still think that KineMaster has one of the most beautiful and intuitive UIs of any mobile app. It really speaks to its ingenuity that even though the app has grown into a respectable mobile video editing powerhouse with many pro features, total editing novices usually have no problem getting the hang of the basics within a couple of hours or even minutes.
While spearheading the mobile video editing revolution on Android, KineMaster dared to become one of the first major apps to drop the one-off payment method and pioneer a subscription model. I had initially paid 2€ one-off for the pro version of the app to get rid of the watermark; now you had to pay 2 or 3€ a month (!). I know, “devs gotta eat”, and I’m all for paying a decent amount for good apps, but this was quite a shock, I have to admit. It needs to be pointed out that KineMaster is actually free to download with all its features (so you can test it fully and with no time limit before investing any money) – but you always get a KineMaster watermark in your exported video and the export resolutions don’t include UHD/4K. If you are just doing home movies for your family, that might be fine, but if you do stuff in a professional or even just more ambitious environment, you probably want to get rid of the watermark. Years later, with every other app having jumped on the subscription bandwagon, I do feel that KineMaster is still one of the apps that are really worth it. I already praised the UI/UX, so here are some of the important features: you get multiple video tracks (resolution and number are device-dependent) and other media layers (including support for PNG images with transparency), options for multiple frame rates including PAL (25/50), the ability to choose from a wide variety of popular aspect ratios for projects (16:9, 9:16, 1:1, 2.35:1 etc.) and even duplicate a project with a different aspect ratio later (very useful if you want to share a video on multiple platforms). You can use keyframes to animate content, and there’s a very good title tool, audio ducking, voice-over recording, basic grading tools and, last but not least, the Asset Store.
That’s the place where you can download all kinds of helpful assets for your edit: music, fonts, transitions, effects and most of all (animated) graphics (‘stickers’) that you can easily integrate into your project and make it pop without having to spend much time on creating stuff from scratch. Depending on what you are doing, this can be a massive help! I also have to say that despite Android’s fragmentation with all its different phones and chipsets, KineMaster works astonishingly well across the board.
There are still things that could be improved (certain parts of the timeline editing process, media management, precise font sizes, audio waveforms for video clips, quick audio fades, project archives etc.), and development progress in the last year or two seems to have slowed down, but KineMaster remains a top contender for the Android video editing crown, although it's far more challenged than in the past. One last note: KineMaster has recently released beta versions of two “helper” apps, VideoStabilizer for KineMaster and SpeedRamp for KineMaster. I personally wish they had integrated this functionality into the main app, but it’s definitely better than not having it at all.
Cyberlink PowerDirector
The first proper rival for KineMaster emerged about half a year later, in June 2014, with Cyberlink’s PowerDirector. Unlike KineMaster, PowerDirector was already an established name in the video editing world, at least on the consumer/prosumer level. In many ways, PowerDirector’s feature set is comparable (though not identical) to KineMaster’s, with one key missing option being export in PAL frame rates (if you don’t need to export in 25/50fps, you can ignore this shortcoming). The UI is also good and pretty easy to learn. After KineMaster switched to the subscription model, PowerDirector did have one big factor in its favor: you could still get the full, watermark-free version of the app with a single, quite reasonable payment – I think it was about 5€. That, however, eventually changed, and PowerDirector joined the ranks of apps that you can’t own anymore but only rent via a subscription for access to all features and watermark-free export. Despite being slightly more expensive than KineMaster now, it’s still a viable and potent mobile video editor with some tricks up its sleeve.
It’s for instance the only mobile video editor with an integrated stabilization tool to tackle shaky footage. It’s also the only one with a dedicated de-noise feature for audio, and unlike KineMaster it lets you mix audio levels by track in addition to by individual clip. Furthermore, PowerDirector offers the ability to transfer projects from mobile to its desktop version via the Cyberlink Cloud, which can come in handy if you want to assemble a rough cut on the phone but do more in-depth work on a bigger screen with mouse control. Something rather annoying is the way the app tries to nudge – or dare I say shove – you towards a subscription. As I bought the app before the introduction of the subscription model, I can still use all of its features and export without a watermark, but before getting to the edit workspace, the app bombards me with full-screen ads for its subscription service every single time – I really hate that. One last thing: there are a couple of special Android devices on which PowerDirector takes mobile video editing to another level, but that’s for a future article, so stay tuned.
Adobe Premiere Rush
Even more so than Cyberlink, Adobe is a well-known name in the video editing business thanks to Premiere Pro (Windows/macOS). More than once I had asked myself why such a big player had missed the opportunity to get into the mobile editing game. Sure, they dipped their toes into the water with Premiere Clip, but after a mildly promising launch, the app’s development stagnated all too soon and it was eventually abandoned – not that much of a loss, as it was pretty basic. In 2018, however, Adobe bounced back onto the scene with a completely new app, Premiere Rush. This time, it looked like the video editing giant was ready to take the mobile platform seriously.
The app has a very solid set of advanced editing features and even some specialties that are quite unique or rare in the mobile editing environment. You can, for instance, expand the audio of a video clip without actually detaching it and risking going out of sync – very useful for J and L cuts. There’s also a dedicated button that activates multi-select for clips in the timeline, another great feature. What’s more, Rush has true timeline tracks for video. What do I mean by “true”? KineMaster and PowerDirector support video layers, but you can’t just move a clip from the primary track to an upper/lower layer track and vice versa – not much of a problem most of the time, but sometimes a nuisance. In Rush you can move your video clips up and down the tracks effortlessly. True tracks also mean that you can easily disable/mute/lock a particular track and all the clips on it. One of Rush’s marketed highlights is the auto-conform feature, which is supposed to automatically adapt your edit to other aspect ratios, using AI to frame the image in the (hopefully) best way. So if you have a classic 16:9 edit, you can use this to get a 1:1 video for Instagram. This feature is reserved for premium subscribers, but you can still manually alter the aspect ratio of your project in the free version. For a couple of months, the app was only available for iOS, but it premiered (pardon the pun!) on Android in May 2019. Like PowerDirector, you can use Adobe’s cloud to transfer project files to the desktop version of Rush (or even import them into Premiere Pro), which is useful for more complex work. It’s also possible to have projects automatically sync to the cloud (a subscriber feature).
Initially, the app had a very expensive subscription of around 10€ per month (and only three free exports to test), unless you were already an Adobe Creative Cloud subscriber, in which case you got it for free. It has now become more affordable (4.89€ monthly or 33.99€ per year), and the basic version with most features including 1080p export (UHD/4K is a premium feature) is free and doesn’t even force a watermark on your footage – you do need to create a (free) account with Adobe though.
The app does have its quirks – how many of them are still teething troubles, I’m not sure. In my tests with a Google Pixel 3 and a Pocophone F1, export times were sometimes outrageously long, even for short 1080p projects. Both my test devices were powered by a Snapdragon 845 SoC, which is a bit older but was a top flagship processor not too long ago and should easily handle 1080p video. Other editing apps had no problem rushing out (there goes another pun!) the same project on the same devices. This leads me to believe that the app’s export engine still needs some fine-tuning and optimization. But maybe things look better on newer and even more powerful devices. Another head-scratcher was frame rate fidelity. While the export window gave me a “1080p Match Framerate” option as an alternative to “1080p 30fps”, surely indicating that it would keep the frame rate of the source clips, working with 25fps footage regularly resulted in a 30fps export. The biggest caveat with Rush, though, is that its availability on Android is VERY limited. If you have a recent flagship phone from Samsung, Google, Sony or OnePlus, you’re invited; otherwise you are out of luck – for the moment at least. For a complete list of currently supported Android devices check here.
VN
Ever since I started checking the Google Play Store for interesting new apps on a regular basis, it has rarely happened that I find a brilliant one that’s already been out for a long time. It does happen on rare occasions, however, and VN is the perfect case in point. VN had already been available for Android for almost two years (the Play Store lists May 2018 as the release date) when it eventually popped up on my radar in March 2020 during a routine search for “video editors”. VN is a very powerful video editor with a robust set of advanced tools and a UI that is clean, intuitive and easy to grasp. You get a multi-layer timeline, support for different aspect ratios including 16:9, 9:16, 1:1 and 21:9, voice-over recording, transparency with PNG graphics, keyframing for graphical objects (not audio, though there’s an option for a quick fade in/out), basic exposure/color correction, a solid title tool, and export options for resolution up to UHD/4K, frame rate (including PAL frame rates) and bitrate.
What’s more, VN is currently the only one of the advanced mobile video editing apps with a dedicated and very easy-to-use speed-ramping tool, which can be helpful for manipulating a clip’s playback speed. It’s also great that you can move video clips up and down the tracks, although it’s not as intuitive as Adobe Premiere Rush in that respect since you can’t just drag & drop but have to use the “Forward/Backward” button. But once you know how to do it, it’s very easy. While other apps might have a feature or two more, VN has a massive advantage: it’s completely free – no one-off payment, no subscription, no watermark. You do have to watch a 5-second full-screen ad when launching the app and delete a “Directed by” bumper clip from every project’s timeline, but it’s really not much of a bother in my opinion. In the past you had to create an account with VN, but that’s no longer a requirement. Will it stay free? When I talked to VN on Twitter some time ago, they told me that the app as such is supposed to remain free of charge, but that they might at some point introduce certain premium features or content. VN recently launched a desktop version for macOS (no Windows yet) and the ability to transfer project files between iOS and macOS. While this is currently only possible within the Apple ecosystem (and does require that you register an account with VN), more cross-platform integration could be on the horizon. All in all, VN is an absolutely awesome and easily accessible mobile video editor available for most Android devices (Android 5.0 & up) – but do keep in mind that depending on the power of your phone’s chipset, the number of video layers and the supported editing/exporting resolution can vary.
Special mention (Motion Graphics): Alight Motion
Alight Motion is a pretty unique mobile app that doesn’t really have an equivalent at the moment. While you can also use it to stitch together a bunch of regular video clips filmed with your phone, that’s not its main focus. The app is totally centered around creating advanced, multi-layered motion graphics projects – think of it as a reduced mobile version of Adobe After Effects. Its power lies in the fact that you can manipulate and keyframe a wide range of parameters (for instance movement/position, size, color, shape etc.) on different types of layers to create complex and highly individual animations, spruced up with a variety of cool effects drawn from an extensive library. It takes some learning to unleash the enormous potential that lies within the app, and fiddling around with a heavy load of parameters and keyframes on a small(ish) touchscreen can occasionally be a bit challenging, but the clever UI (designed by the same person who made KineMaster so much fun to use) makes the process basically as good and accessible as it can get on a mobile device. The developers also just added effect presets in a recent update, which should make things easier for beginners who might be somewhat intimidated by manually keyframing parameters. Pre-designed templates for graphics and animations created by the dev team or other users will make things even more accessible in the future – some are already available, but still too few to fully convince passionate users of apps such as the very popular but discontinued Legend. Alight Motion is definitely worth checking out, as you can create amazing things with it (like explainer videos or animated infographics) if you are willing to accept a small learning curve and invest some time. This is coming from someone who regularly throws in the towel trying to get the hang of Apple’s dedicated desktop motion graphics software, Motion.
Alight Motion has become the first application in this category in which I actually feel like I know what I’m doing – sort of, at least. One very cool thing is that you can also use Alight Motion as a photo/still graphics editor, since it lets you export the current timeline frame as a PNG, even with transparency! The app is free to download, but to access certain features and export without a watermark you have to get a subscription, which is currently around 28€ per year or 4.49€ per month.
Special mention (Automated Editing): Quik
Sometimes things have to go quik-ly and you don’t have the time or ambition to assemble your clips manually. While I’m generally not a big fan of automated video editing, GoPro’s free Quik app can come in handy at times. You just select a bunch of photos or videos, an animation style and your desired aspect ratio (16:9, 9:16, 1:1), and the app creates an automatic edit for you based on what it thinks are the best bits and pieces. In case you don’t like the results, you have the option to change things around and select excerpts that you prefer – generally, manual control is rather limited though, and it’s definitely not for more advanced edits. It’s also better suited for purely visual edits without important scenes relying on the original audio (like a person saying something of interest). GoPro, which acquired the app a while back, is apparently working on a successor to Quik and will pull the current version from the Google Play Store later in 2021, but here’s hoping that the “new Quik” will be just as useful and accessible.
Special mention (360 Video Editing): V360
While 360 video hasn’t exactly become mainstream, I don’t want to ignore it completely in this post. Owners of a 360 camera (like the Insta360 One X2 I wrote about recently) usually get a companion mobile app along with the hardware, which also allows basic editing. The Insta360 app actually gives you quite a range of tools, but it’s geared more towards reframing and exporting as traditional flat video – you can only export a single clip in true 360 format. So if you want to create a story with multiple 360 video clips and export it as true, immersive 360 video with the appropriate metadata for 360 playback, you need a 3rd party app. I already mentioned V360 in one of my very early blog posts, but I want to come back to it as the landscape hasn’t really changed since then. V360 gives you a set of basic editing tools to create a 360 video story with multiple clips. You can arrange the clips in the desired order, trim and split them, and add music and titles/text. It’s rather basic but good for what it is, with a clean interface and exports in original resolution (at least up to the 5.7k I was able to test). The free version doesn’t let you add transition effects between clips and appends a V360-branded bumper clip at the end that you can only remove in the paid version, which costs 4.99€. There are two other solid 360 video editors (Collect and VeeR Editor) that are comparable and even offer some additional/different features, but I personally like V360 best – although it has to be said that the app hasn’t seen an update in over two years.
What’s on the horizon?
There’s one big name in mobile editing town that’s still missing from the Android platform – of course I’m talking about LumaFusion. According to LumaTouch, the company behind LumaFusion, they are currently exploring an Android version and have apparently already hired some dedicated developers. Despite the various challenges a demanding app like LumaFusion will encounter in a port to a different mobile operating system, I suspect we will see at least an early beta version in 2021. Furthermore, despite not having any concrete evidence, I assume that an Android version of Videoleap, another popular iOS-only video editor, might also be in the works. Not quite as advanced and feature-packed as LumaFusion, it’s pretty much on par in many respects with the current top dogs on Android. So while there definitely is competition, the app’s demands are certainly within what can be achieved on Android, and the fact that its developer has already brought other apps from its portfolio to Android indicates some interest in the platform.
A couple of years ago, 360° (video) cameras burst onto the scene and seemed to be all the rage for a while. The initial excitement faded relatively quickly, however, when producers realized that this kind of video didn’t resonate with the public as much as they had thought it would – at least in the form of immersive VR (Virtual Reality) content, for which you need extra hardware that most people didn’t bother to get or didn’t get hooked on. From a creator’s side, 360 video also involved some extra and – dare I say – fairly tedious workflow steps to deliver the final product (I have one word for you: stitching). That’s not to say that this extraordinary form of video doesn’t have value or vanished into total obscurity – it just didn’t become a mainstream trend.
Among the companies that invested heavily in 360 cameras was Shenzhen-based Insta360. They offered a wide variety of devices: some standalone, some meant to be physically connected to a smartphone. I actually got the Insta360 Air for Android devices, and while it was not a bad product at all and fun for a short while, the hassle of connecting it to the phone’s USB port for every use and taking it off again to put the phone back in your pocket quickly sucked out the motivation to keep using it.
Repurposing 360 video
While continuing to develop new 360 cameras, Insta360 realized that 360 video could be utilized for something other than just regular spherical 360 video: overcapture and subsequent reframing for “traditional”, “flat” video. What does this mean in plain English? Well, the original spherical video that is captured is much bigger in terms of resolution/size than the one you want as a final product (for instance classic 1920×1080), which gives you the freedom to choose your angle and perspective in post production and even create virtual camera movement and other cool effects. Insta360 by no means invented this idea but they were clever enough to shift their focus towards this use case. Add to that the marketing-gold feature of the “invisible selfie-stick” (taking advantage of a dual-lens 360 camera’s blind spot between its lenses), brilliant “Flow State” stabilization and a powerful mobile app (Android & iOS) full of tricks, and you end up with a significant popularity boost for your products!
The One X and the wait for a true successor
The one camera that really proved to be an instant and long-lasting success for Insta360 was the One X, released in 2018. A very compact & slick form factor, ease of use and very decent image quality (except in low light), plus the clever companion app, breathed some much-needed life into a fairly wrinkled and deflated 360 video camera balloon. In early 2020 (you know, the days when most of us still didn’t know there was a global pandemic at our doorstep), Insta360 surprised us by not releasing a direct successor to everybody’s darling (the One X) but the modular One R, a flexible and innovative but slightly clunky brother to the One X. It wasn’t until the end of October 2020 that Insta360 finally revealed the true successor to the One X, the One X2.
In the months prior to the announcement of the One X2, I had actually thought about getting the original One X (I wasn’t fully convinced by the One R) but it was sold out in most places and there were some things that bothered me about the camera. To my delight, Insta360 seemed to have addressed most of the issues that I (and obviously many others) had with the original One X: they improved the relatively poor battery life by making room for a bigger battery, they added the ability to connect an external mic (both wirelessly via Bluetooth and via the USB-C port), they included a better screen on which you can actually see things and change settings in bright sunlight, they gave you the option to stick on lens guards to protect the delicate protruding lenses, and they made it more rugged, including an IPX8 waterproof rating (up to 10m) and a less flimsy thread for mounting it to a stick or tripod. All good then? Not quite. Just by looking at the spec sheet, people realized that there wasn’t any kind of upgrade in terms of video resolution or even just frame rates. It’s basically the same as the One X: it maxes out at 5.7k (5760×2880) at 30fps (with options for 25 and 24), 4k at 50fps and 3k at 100fps. The maximum bitrate is 125 Mbit/s. I’m sure quite a few folks had hoped for 8k (to get on par with the Kandao Qoocam 8K) or at the very least a 50/60fps option for 5.7k. Well, tough luck.
While I can certainly understand some of the frustration about the fact that there hasn’t been any bump in resolution or frame rates in two years, putting 8K into such a small device – and having that footage remain editable on today’s mobile devices – probably wasn’t a step Insta360 was ready to take, because of the risk of a worse user experience despite the higher-resolution image. Personally, I wasn’t bothered too much by this since the other hardware improvements over the One X were good enough for me to go ahead and make the purchase. And this is where my own frustrations began…
Insta360 & me: It’s somewhat difficult…
While I was browsing the official Insta360 store to place my order for the One X2, I noticed a pop-up saying you could get 5% off your purchase if you signed up for their newsletter. They did exclude certain cameras and accessories, but the One X2 was mentioned nowhere. So I thought, “Oh, great! This comes at just the right time!”, and signed up for the newsletter. After I received the discount code however, entering it at check-out always returned a “Code invalid” error message. I took to Twitter to ask them about this – no reply. I contacted their support by email and they eventually and rather flippantly told me something like “Oh, we just forgot to put the X2 on the exclusion list, sorry, it’s not eligible!”. Oh yes, the Insta360 support and I were off to a great start!
Wanting to equip myself with the (for me) most important accessories, I intended to purchase a pair of spare batteries and the microphone adapter (USB-C to 3.5mm). I could write a whole rant about how outrageous I find it that literally everyone seems to make proprietary USB-C to 3.5mm adapters that don’t work with other brands/products. E-waste galore! Anyway, there’s a USB-C to 3.5mm microphone adapter from Insta360 available for the One R, and I thought, well, at least within the Insta360 ecosystem there should be some cross-device compatibility. Hell no – they told me the microphone adapter for the One R doesn’t work with the One X2. OK, so I need to purchase the more expensive new one for the X2 – swell! But wait, I can’t, because while it’s listed in the Insta360 store, it’s not available yet. And neither are extra batteries. The next bummer. So I bought the Creator Kit including the “invisible” selfie-stick, a small tripod, a microSD card, a lens cap and a pair of lens guards.
A couple of weeks later, the package arrived – no problem, in the era of Covid I’m definitely willing to cut some slack on delivery times, and the merchandise is sent from China, so it has quite a way to travel to Germany. I opened the package, took out the items and checked whether anything was broken. I noticed that one of the lens guards had a small blemish/scratch on it. I put them on the camera anyway, thinking maybe it wouldn’t really show in the footage. Well, it did. A bit annoying, but stuff like that happens – a lemon. I contacted the support again. They wanted me to take a picture of the affected lens guard. Okay. I sent them the picture. They bluntly replied that I should just buy a new one from their store, basically insinuating that it was me who had damaged the lens guard. What terrible customer service! I suppose I would have mustered up some understanding for their behaviour if I had contacted them a couple of days or weeks later, after actually using the X2 for some time outdoors where stuff can quickly happen. But I got in touch with them the same day the delivery arrived, and they should have been able to see that since the delivery had a tracking number. Also, this item costs 25 bucks in the Insta360 store but probably only a few cents in production, and I wasn’t even asking about a pair, only a single one – why make such a fuss about it? So there was some back-and-forth, and only after I threatened to return the whole package and asked for a complete refund did they finally agree to send me a replacement pair of lens guards at no extra cost. On a slightly positive note, the replacements did arrive very quickly, only a couple of days later.
Is the Insta360 One X2 actually a good camera?
So what an excessive prelude I have written! What about the camera itself? I have to admit that for the most part, it’s been a lot of fun so far after using it for about a month. The design is rugged yet still beautifully simplistic and compact, the image quality in bright, sunny conditions is really good (if you don’t mind that slightly over-sharpened wide-angle look and that it’s still “only” 5.7k – remember this resolution is for the whole 360 image so it’s not equivalent to a 5.7k “flat” image), the stabilization is generally amazing (as long as the camera and its sensor are not exposed to extreme physical shakes which the software stabilization can’t compensate for) and the reframing feature in combination with the camera’s small size and weight gives you immense flexibility in creating very interesting and extraordinary shots.
Sure, it also has some weaknesses. Despite the 5.7k 360 resolution, if you want to export as a regular flat video, you are limited to 1080p – if you need your final video in UHD/4K non-360 resolution, this camera is not for you. The relatively small sensor (I wasn’t able to find out the exact size for the X2 but I assume it’s the same as the One X, 1/2.3″) makes low-light situations at night or indoors a challenge despite a (fixed) aperture of f/2.0 – even a heavily overcast daytime sky can prove less than ideal. Yes, a slightly bigger sensor compared to its predecessors would have been welcome. The noticeable amount of image noise that auto-exposure introduces in such dim conditions can be reduced by exposing manually (you can set shutter speed and ISO), but then of course you might just end up with an image that’s quite dark. The small sensor also doesn’t allow for any fancy “cinematic” bokeh, but in combination with the fixed focus it also has an upside that shouldn’t be underestimated by self-shooters: you don’t have to worry about a pulsating auto-focus or being out of focus, as everything is always in focus. You can also shoot video in LOG (flatter image for more grading flexibility) and HDR (improved dynamic range in bright conditions) modes. Furthermore, there’s a dedicated non-360 video mode with a 150-degree field of view, but except for a slight bump in resolution compared to flat reframed 360 video (1440p vs. 1080p) and smaller file sizes (you can also shoot your 5.7k in the H.265 codec to save space), I don’t see myself using this a lot, as you lose all the flexibility in post.
While it’s good that all the stitching is done automatically and the camera does a fairly good job, it’s not perfect, and you should definitely familiarize yourself with where the (video) stitch line runs so you can keep it away from important objects or persons, particularly faces. As a rule of thumb, when filming yourself or others, always point one of the two lenses towards the subject rather than the side of the camera. This is fairly easy if the camera usually stays in the same position relative to yourself, but it becomes trickier when you include elaborate camera movements (which you probably will, as the X2 basically invites you to!).
Regarding the audio, the internal 4-mic ambisonic setup can produce good results for ambient sound, particularly if you have the camera close to the sound source – like when you have it on a stick pointing down while walking over fresh snow, dead leaves, gravel etc. For recording voices in good quality, you also need to be pretty close to the camera’s mics; having it on a fully extended selfie-stick isn’t ideal. If you want to use the X2 on an extended stick and talk to the camera, you should use an external mic – either one that is directly connected to the camera, or one plugged into an external recorder, which means syncing audio and video later in post. As I mentioned before, the X2 now offers support for external mics via the USB-C charging port with the right USB-C-to-3.5mm adapter and also via Bluetooth. Insta360 highlights in their marketing that you can use Apple’s AirPods (Pro), but you can also use other mics that work via Bluetooth. The audio sample rate of Bluetooth mics is currently limited to 16kHz by the standard, but depending on the mic used you can get decent audio. I’ll probably write a separate article on using external mics with the X2 once my USB-C to 3.5mm adapter arrives. Wait, does the X2 shoot 360 photos as well? Of course it does, and they turn out quite decent, particularly with the new “Pure Shot” feature – the stitching is even better than in video mode. It’s no secret though that the X2 focuses on video with all its abilities, and for those who mainly care about 360 photography for virtual tours etc., the offerings in the Ricoh Theta line will probably be the better choice.
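As a small aside, that 16kHz cap is less dire for speech than it might sound: by the Nyquist theorem, a digital recording can only contain frequencies up to half its sample rate, and most of the energy of a human voice sits below the resulting 8kHz ceiling. A quick sketch of the arithmetic (this little calculation is my own, not anything from Insta360):

```python
def nyquist_limit_hz(sample_rate_hz):
    """Highest frequency a given sample rate can represent (Nyquist theorem)."""
    return sample_rate_hz / 2

print(nyquist_limit_hz(16000))  # 8000.0 -> enough for intelligible speech
print(nyquist_limit_hz(48000))  # 24000.0 -> covers the full audible spectrum
```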
The Insta360 mobile app
The Insta360 app (Android & iOS) might deserve its own article to get into detail but suffice it to say that while it can seem a bit overwhelming and cluttered occasionally and you also still experience glitches now and then, it’s very powerful and generally works well. Do note however that if you want to export in full 5.7k resolution as a 360 video you have to transfer the original files to a desktop computer and work with them in the (free) Insta360 Studio software (Windows/macOS) as export from the mobile app is limited to 4K. You should also be aware of the fact that neither the mobile app nor the desktop software works as a fully-fledged traditional video editor for immersive 360 video where you can have multiple clips on a timeline and arrange them for a story. In the mobile app, you do get such an editing environment (“Stories” – “My Stories” – “+ Create a story”) but while you can use your original spherical 360 footage here, you can only export the project as a (reframed) flat video (max resolution 2560×1440). If you need your export to be an actual 360 video with according metadata, you can only do this one clip at a time outside the “Stories” editing workspace. But as mentioned before, Insta360 focuses on the reframing of 360 video with its cameras and software, so not too many people might be bothered by that. One thing that really got on my nerves while editing within the app on an iPad: When you are connected to the X2 over WiFi, certain parts of the app that rely on a data connection don’t work, for instance you are not able to browse all the features of the shot lab (only those that have been cached before) or preview/download music tracks for the video. This is less of a problem on a phone where you still can have a mobile data connection while using a WiFi connection to the X2 (if you don’t mind using up mobile data) but on an iPad or any device that doesn’t have an alternative internet connection, it’s quite annoying. 
You have to download the clip first, then disconnect from the X2, re-connect to your home WiFi and download the track you want to use.
Who is the One X2 for?
Well, I’d say that it can be particularly useful for solo shooters and solo creators for several reasons. Most of all, you don’t have to worry much about missing something important around you while shooting, since you are capturing a 360 image and can choose the angle in post (reframing/keyframed reframing) if you export as a regular video. This can be extremely useful in scenarios where there’s a lot to see or a lot happening around you – if you are travel-vlogging from interesting locations or reporting from within a crowd, or just generally if you want to do a piece-to-camera while also showing the viewer what you are looking at in that moment. Insta360’s software stabilization is brilliant and comparable to a gimbal, and the “invisible” selfie-stick makes it look like someone else is filming you. The stick and the compact form of the camera also let you move the camera to places that seem impossible otherwise. With the right technique you can even do fake “drone” shots. So it also makes sense to have the X2 in your tool kit just for special shots, even if you are neither a vlogger nor a journalist, nor interested in “true” 360 video.
A worthy upgrade from the One X / One R?
Should you upgrade if you have a One X or One R? Yes and no. If you are happy with the battery life of the One X or the form factor of the One R and were mainly hoping for improved image quality in terms of resolution / higher frame rates, then no, the One X2 does not do the trick, it’s more of a One X 1.5 in some ways. However, if you are bothered by some “peripheral” issues like poor battery life, very limited functionality of the screen/display, lack of external microphone support (One X) or the slightly clunky and cumbersome form factor / handling (One R) and you are happy with a 5.7k resolution, the X2 is definitely the better camera overall. If you have never owned a 360 (video) camera, this is a great place to start, despite its quirks – just be aware that Insta360’s support can be surprisingly cranky and poor in case you run into any issues.
As always, if you have questions or comments, drop them below or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter about important things that happened in the world of mobile video.
While writing my last blog post about Google Recorder 2.0, I stumbled upon a hack that can also be utilized for another app from Google, one that currently understands over 70 languages, not only English: It’s called “Live Transcribe & Sound Notifications” and is available for pretty much every Android device. Have you always been looking for a tool that transcribes your audio recordings but doesn’t require an expensive subscription? Here’s what I like to think is a very useful and simple trick for achieving this on an Android phone. You will need the following things:
Android device running at least Android 5.0 Lollipop (if your phone is less than 5 years old, you should be safe!)
an internet connection (either mobile data or wifi)
a quiet environment
Let’s say you have recorded some audio like an interview, a meeting, a vox pop, a voice-over for video or even a podcast on your smartphone (look here for some good audio recorder apps) and would like to have a text transcription of it. If you are reading this before making such a recording, do include a few seconds of silence before anyone talks, and make sure the recording is of good quality in terms of speech clarity – the reasons will become obvious soon.
Here’s how it works!
Open Live Transcribe and check the input language displayed in the bottom toolbar (if the toolbar isn’t there, just tap on the screen somewhere). It needs to be the same as the recording you want to have transcribed. If it’s a different one, tap on the gear icon and then on “More settings”. Choose the correct language. Unlike Google Recorder which I wrote about in my last article, Live Transcribe works with a vast number of languages, not only English. Also unlike Recorder however, Live Transcribe needs an active internet connection to transcribe, you can’t use it offline! If you are planning on pasting the transcription into a context with a white background later on, you should make sure that “Dark Theme” is disabled in Live Transcribe. Otherwise you will be pasting white text onto a white background. Leave the settings menu and check that Live Transcribe’s main screen says “Ready to transcribe” in the center. Now double-check that you are in a quiet environment, leave Live Transcribe and open the audio recording app. Locate the recording you want to have transcribed and start the playback of the file (do make sure the speaker volume is sufficient!), then quickly switch over to Live Transcribe. One way to do this is to use Android’s “Recent Apps” feature which can be accessed by tapping on the square icon in the 3-button navigation bar – some Android phone makers use a different icon, Samsung for instance now has three vertical lines instead of a square. If you are using gesture navigation, swipe up from the bottom and hold. But you can also just leave the audio recording app and open Live Transcribe again without going into recent apps. The recording will keep playing with Live Transcribe picking up the audio from the phone’s speaker(s) and doing its transcription thing as if someone was talking into the phone’s mic directly. This actually works! Don’t worry if you notice mistakes in the transcription, you can fix them later. 
Once the recording and subsequently the transcription is finished, long-tap on any word, choose “Select transcription” and then “Copy”. You have now copied the whole transcription to the clipboard and can paste it anywhere you like: eMail, Google Docs etc. That’s also where you are now able to correct any mistakes that Live Transcribe has made (within Live Transcribe, there’s no option for editing the transcription yet). Two more things: You can have Live Transcribe save your transcripts for three days (activate it in the settings or activate auto-save under “More settings”) and if you want to clear out the app’s transcription cache, you can also do this under “More settings”, then choose “Delete history”.
Can you do the same with video recordings?
What about video recordings? Could you have them transcribed via Live Transcribe as well? Basically yes, but it’s not quite as easy – at least if you want to do it using only one device (it’s very easy if you use a second device for playback). When you leave an app that’s playing back a video, the video (and with it its audio) will stop playing, so there’s nothing for Live Transcribe to listen to. You can work around this by using Android’s split-screen or multi-window feature to actively run more than one app at the same time. On Android 7 and 8 you can access split-screen by long-pressing the square icon (recent apps) in the bottom navigation bar and selecting the app(s) you want to run in split-screen mode. Things have changed with Android 9 however. For one, gesture navigation was introduced as an alternative to the “old” 3-button navigation bar. So if you are using gesture navigation, you access recent apps by swiping up from the bottom and then holding. If you use the 3-button navigation, long-pressing the square icon doesn’t do anything anymore. Instead, just tap it once to access the recent apps view, tap on the app’s icon at the top of the window and you will get a pop-up menu. Depending on which Android phone you are using, the menu will have slightly different items, or at least they are named differently: on my LG G8X I get “App info”, “Multi window”, “Pop-up window” and “Pin app”; on my Pixel 3 I get “App info”, “Split screen”, “Freeform” and “Pause app”. The items you want for running two apps side by side are “Multi window” (G8X) / “Split screen” (Pixel 3), which split the screen in half, or “Pop-up window” (G8X) / “Freeform” (Pixel 3), which display the app(s) in a small, desktop-like window that you can move around freely. This way, you can play back a video clip and have Live Transcribe running at the same time.
Of course you can also use this feature to have both Live Transcribe and the playback of an audio recording app on the same screen simultaneously but for audio file transcriptions, you don’t have to go the extra mile.
Can I do this on an iPhone as well?
Google has a whole range of apps for iOS, but unfortunately, Live Transcribe isn’t among them – it’s currently Android-only. But hey, maybe you have an older Android phone in your drawer that you could put to good use again? That being said, there is the possibility that Google will eventually release an iOS version of Live Transcribe or that Apple will come up with an app that does something similar. I also thought of another way, using a Google app that is already available for iOS: Google Translate. Yes, it’s meant for translation, not transcription, but in the Android version you can also find a “Transcribe” button. Initially, using this will only give you a transcription in the translated language, but if you tap the cog wheel in the bottom left corner and choose “Show original text”, you will actually get a transcription of the original language which you can then copy and paste. When checking the iOS version of Translate though, I noticed that there is no “Transcribe” button. There is a “Voice” button (which in the Android version has been moved to the search bar) but this will only pick up a limited amount of input and is quite slow. There’s also no “Show original text” option. I suppose there’s a chance that Google will update its iOS version to match the Android one, but there are no guarantees. The Android version of Google Photos has had a pretty impressive video stabilization feature for quite a while now, something that is still missing from the iOS version. It might be a purely strategic thing – Google wanting to reserve certain features for users of its own mobile operating system – but it might also be for technical reasons, such as the core transcription engine being so deeply rooted in the Android system that it’s just not possible to tap into it on iOS, where Google is “just” a 3rd-party app developer. Let’s see how things turn out in the coming months.
If you have any questions or comments, leave them here or hit me up on the Twitter @smartfilming. Do also consider subscribing to my Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly “Ten Telegram Takeaways” bullet-point recap of what happened in the world of mobile video creation during the last four weeks.
Not too long ago, I wrote an article about my favorite audio recorder apps for Android. One of the apps I included was Google Recorder. Officially, the app is only available for Pixel phones but can be sideloaded to a range of other Android devices. Google Recorder has a unique place among audio recording apps because of one killer feature: it transcribes audio into text – offline and for free. This can be extremely useful for a lot of people, particularly journalists. With the launch of the new Pixel 5 / Pixel 4a 5G, Google has introduced version 2.0 of Recorder and it packs some really exciting new features and improvements!
Edit the transcript
As good as Google’s voice recognition and subsequent transcription works, it occasionally makes mistakes. Before version 2.0, you weren’t able to make any kind of edits to the transcription within the app (it was possible to export the text and then make corrections). With the update you can now edit your transcript, however only one word at a time.
Edit your recording
Another new feature of version 2.0 is that you can now edit the audio recording itself by cropping/trimming (cut off something at the beginning/end) or removing a part in the middle. You can actually also do this by removing words from the transcript and it will automatically cut the audio file accordingly! You can access this feature by tapping on the scissors icon in the top right corner when having a recording selected. This particular feature can also come in very handy to bypass a limitation of another new feature which I will talk about in a second.
Create a video with waveforms and captions
Quite possibly the coolest new feature of version 2.0 is the ability to create a video with waveforms and captions from your audio file. This is very useful for sharing audio snippets or teasers on social networks where everything is primarily focused on visual impressions. I was even more delighted to find that you can customize a couple of things for the video: You can choose whether you want the waveforms plus captions or only the waveform. You can also select the aspect ratio of the video (square, portrait, landscape) and the color theme (dark/light). This is great! One thing they could have added is an option to choose a photo as a background image for the video. You will also notice that there are two watermarks at the bottom (the Recorder app logo and a “Recorded on Pixel” branding), unfortunately there’s no way to hide them before exporting. You can however use a separate video editing app to crop the image or place a black/white layer over the bottom part to cover it up. One last thing to mention: You can only create videos from clips that have a maximum length of 60 seconds. So for longer recordings you need to cut out a chunk via the editing tool, save it as a copy and then create your video from this excerpt. The export resolution of the video is 1080×1080 for square, 720×1280 for portrait and 1280×720 for landscape, all at 30fps.
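Since exports are capped at 60 seconds, a longer recording first has to be carved into excerpts of at most a minute. Just to illustrate the arithmetic of that workflow (this helper is my own sketch, not anything built into the app):

```python
def excerpt_ranges(total_seconds, max_len=60):
    """Start/end points (in seconds) for cutting a recording into
    excerpts no longer than max_len, matching Recorder's 60 s video cap."""
    return [(start, min(start + max_len, total_seconds))
            for start in range(0, total_seconds, max_len)]

# A 150-second interview would need three cuts:
print(excerpt_ranges(150))  # [(0, 60), (60, 120), (120, 150)]
```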
Perfect? Not quite!
Two shortcomings that I already pointed out in my other blog post unfortunately haven’t been addressed in version 2.0. First, Google Recorder is still limited to English. I’m sure though that support for other languages will be coming soon, because Google’s own Live Transcribe app – which I think uses the very same engine for voice recognition and transcription – is already polyglot. The second, minor setback concerns its potential use in a professional (broadcast) environment: the app only records with a sample rate of 32kHz. That’s not a problem for professional use per se, because I think it’s fair to say that you can also call it a “professional” tool when you “just” use the transcription for your work. But if you want to use the audio recording as such (say for broadcast radio), the sample rate doesn’t match the usual standards of 44.1/48 kHz. If Google Recorder allowed importing audio files from outside the app, this limitation could be circumvented, but you can only use files recorded within the app – and I don’t think this is going to change soon, as Google probably wants the user experience to be as easy as possible and importing files from other apps might not fit the bill. Ease of use is probably also the reason you can’t customize anything in terms of recording quality. The sample rate of 32kHz should however be just fine for less “official” formats like podcasts or social media / the web. I have also thought of a hack to record in higher quality but still take advantage of Google Recorder’s features: record your audio with another app that offers a higher sample rate (for instance ShurePlus Motiv) and then play it back on your phone while simultaneously recording with Google Recorder. Google Recorder picks up the playback from your phone’s speaker and treats it as if you were talking into the mic. This actually works quite well, but of course you need to be in a quiet environment.
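If a broadcast system insists on a 44.1/48 kHz file and you’d rather not re-record via the speaker, you could also just resample the 32kHz recording on a desktop – upsampling adds no real fidelity, but it produces a file such systems will accept. A rough, stdlib-only Python sketch for a mono 16-bit WAV (simple linear interpolation; the function name is my own):

```python
import struct
import wave

def resample_wav(src_path, dst_path, target_rate=48000):
    """Resample a mono 16-bit PCM WAV via linear interpolation (stdlib only).

    Note: upsampling 32 kHz audio to 48 kHz does not add fidelity, it only
    produces a file that systems expecting 48 kHz will accept.
    """
    with wave.open(src_path, "rb") as src:
        if src.getnchannels() != 1 or src.getsampwidth() != 2:
            raise ValueError("expected a mono 16-bit PCM WAV")
        src_rate = src.getframerate()
        raw = src.readframes(src.getnframes())

    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    n_out = int(len(samples) * target_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / target_rate          # position in the source signal
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]  # clamp at the last sample
        out.append(int(round(a + (b - a) * frac)))

    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(target_rate)
        dst.writeframes(struct.pack("<%dh" % len(out), *out))
```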
If you want to use the app’s ability to create a video with waveforms and captions but incorporate the original audio and not the lower quality re-recording, export the re-recording as a video file, then import the video into a video editing app that lets you exchange the audio with the original higher quality recording.
For which devices is Google Recorder available?
Officially, Google Recorder is only available for Google’s own Pixel phones, excluding the very first Pixel (XL). These are: Pixel 2 (XL), Pixel 3 (XL), Pixel 3a, Pixel 4 (XL), Pixel 4a, Pixel 4a 5G and Pixel 5. If you have used version 1 but can’t find the new features, you need to update the app. So are you totally out of luck if you don’t own a Google Pixel? Not quite! It’s actually possible to sideload the app to a whole range of other Android phones running Android 9 or newer (version 1 of Google Recorder) or Android 10 or newer (version 2). However, while the app can be installed on other Android devices that – in theory – should be able to run it, not all do so without problems in reality. I have had excellent results with phones from LG (V30 running version 1 of Google Recorder and now the G8X running version 2) where the app seems to work flawlessly. It also works well on the Huawei P30. On the other hand, the OnePlus 3 only does the recording part, not the transcription. And the Xiaomi Pocophone F1 lets you install and open the app but the moment you try to start a recording, the app crashes. Bottom line: Your mileage will vary with non-Pixel devices and if you’re about to buy a new phone and want to make sure Google Recorder 2.0 runs with all relevant features, you should get one of the recent Pixel phones. If you have a non-Pixel phone that theoretically should be able to run the app by sideloading it, just give it a go, you might be lucky!
How can I sideload the app and is it safe?
Unlike Apple’s iOS, Android lets you sideload apps onto your device. Sideloading basically means installing apps from sources other than the official app store – in Android’s case, Google’s Play Store. When you download an Android app from outside the Play Store, you get an apk file that you can then open and install. For security reasons, installs from sources other than the Play Store are disabled by default on Android and the system will warn you when you try to install an apk file. You can override this protective layer though by allowing certain apps (in most cases the browser you used to download the apk file) to perform installs from so-called “unknown sources”. I highly recommend only downloading and installing apk files from sites you trust. Personally, I have only downloaded apks from XDA Developers and APKMirror so far. Now if you want to rush over to APKMirror and get the Google Recorder 2.0 apk, there’s one more hoop you have to jump through, at least for the moment: the download is provided not as a single apk file but as an “apk bundle”, a different way of packaging an app to reduce the file size. But while Android can handle installing single-file apks out of the box, you need an extra app to install apk bundles. I used APKMirror’s own APKMirror Installer which you can download as a regular app from the Google Play Store. After downloading both APKMirror Installer and the Google Recorder 2.0 apk bundle onto your Android device, open APKMirror Installer, tap “Browse files” and select the Google Recorder 2.0 apk bundle (it has an .apkm file extension). Choose “Install package” and you’re finally done!
To wrap it up: with the 2.0 update, Google has immensely improved its fascinating Recorder app and made it an even more powerful tool for recording, auto-transcribing and sharing audio – one that might be a decisive factor for choosing a Google Pixel over any other phone, be it Android or an iPhone. What’s your experience with Google Recorder? Have you used it? If you have sideloaded it onto a non-Pixel device, how does it work? Let me know in the comments or hit me up on Twitter @smartfilming. If you like this article, do consider subscribing to my Telegram Newsletter to get notified about new blog posts.
I’ve been thinking about getting my first full-frame DSLM camera for some time now, and there are a whole lot of very tempting offerings out there. None however ticked all the boxes that matter most to me – including excellent autofocus, great battery life, no recording limit and a price tag of around 2k. Very recently, Sony announced the Alpha 7c, its smallest full-frame camera so far. While the A7c recycles a lot of established components from earlier Sony cameras and received quite a bit of flak for that (same sensor! no 4K60! no 10bit!), it does include some minor improvements over the Alpha 7 III that might actually be a major deal for some: a fully articulating screen, eye-tracking autofocus for video and unlimited recording. On the other hand, reviewers found that the in-body image stabilization via sensor shift (IBIS) was curiously worse than that of the A7 III.
While watching some A7c-related videos on YouTube a few days ago, I stumbled upon a very interesting video by Gordon Laing though:
He reveals that the A7c has a “hidden” feature that relates to video stabilization. I say “hidden” because Sony for whatever reason didn’t bother to mention it at all when promoting its latest camera release, focusing entirely on the small form factor. Sony’s A7c has a built-in gyroscope that records metadata about the camera’s orientation and movement in 3D space while filming, so basically every shake you make leaves a metadata trace in the file. This metadata can be used by Sony’s free desktop software Catalyst Browse to correct the shakes and stabilize the footage in post. As you can see in Gordon Laing’s video, the results are very impressive, almost gimbal-like! This was also picked up by other YouTubers like Camera Conspiracies and Lens Library. Sure, it’s another step in post production (and the software seems to take its time to process footage) but the prospect of not having to pack and balance a gimbal, and instead becoming even more mobile, is very promising in my opinion.
Now how does this relate to smartphone videography? As you might know, all modern smartphones (unlike most traditional cameras) have gyro sensors; the most basic thing they do is control the screen’s orientation (portrait or landscape) based on how you’re holding your phone. Why not take advantage of this in a more advanced way and record gyro metadata when capturing video? Google already has a pretty amazing and free software stabilization feature in the Android version of Photos (many still don’t know about it!) but I’m quite sure this is not (yet) based on recorded gyro metadata. While it might not be that easy for a 3rd party app like Filmic Pro to siphon the gyro metadata off the sensor, it should generally be possible. And what’s more: with the smartphone being not only a camera but also a computer that runs software, the post-stabilization process (it might be too much for a processor to handle in real time while shooting!) could be done on the very same device, unlike when shooting on a DSLM like the A7c. Of course this would also mean we need some sort of mobile Catalyst Browse app for Android and iOS, but maybe pro mobile video editing apps like LumaFusion or KineMaster could make this happen in the near future? It will require powerful processors but I think I’m hardly exaggerating when I say that modern flagship phones can be more powerful than a lot of desktop computers. I’m not a software developer so maybe I’m asking too much (at least right now) but I sure think it’s worth a thought – well, actually more than just one!
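The core idea behind gyro-metadata stabilization is surprisingly simple: integrate the recorded angular velocity into a per-frame camera angle, then counter-rotate each frame by exactly that angle in post. The following is a toy sketch of that principle (all function names are mine; real pipelines like Catalyst Browse work in full 3D with lens-distortion models and rolling-shutter correction):

```python
# Toy model of gyro-metadata stabilization: integrate the recorded
# angular velocity into a cumulative camera angle per sample, then
# apply the inverse rotation to each frame. Single roll axis only,
# to keep the principle visible.

def integrate_gyro(gyro_samples, sample_dt):
    """Turn angular-velocity samples (rad/s) into cumulative angles (rad)."""
    angle = 0.0
    angles = []
    for omega in gyro_samples:
        angle += omega * sample_dt
        angles.append(angle)
    return angles

def counter_rotations(angles):
    """The stabilizer applies the inverse of the camera's motion."""
    return [-a for a in angles]

# Camera shakes at +0.2 rad/s for half a second, then -0.2 rad/s:
gyro = [0.2] * 50 + [-0.2] * 50   # 100 samples at 100 Hz
angles = integrate_gyro(gyro, 0.01)
fixes = counter_rotations(angles)

# At the midpoint the camera has rolled ~0.1 rad; the correction is ~-0.1
print(round(angles[49], 6), round(fixes[49], 6))
```

The reason the gyro approach beats purely visual stabilization is visible even in this toy: the correction comes from the sensor track, not from analyzing (and possibly misreading) image content.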
What do you think? Would this be something you are interested in? How do you like the results? Let me know in the comments or hit me up on Twitter @smartfilming. If you like this blog post, do consider signing up for my Telegram Newsletter where you will be notified about new blog posts.
One of the things I really like about Apple’s ecosystem is the cross-platform integration of a functionality called “AirDrop” which lets you transfer (big) files quickly, wirelessly and offline between Apple devices that are close to each other, be it Mac, iPhone or iPad. This is extremely helpful for video files which, as we all know, can get pretty heavy these days, particularly if you record in UHD/4K. Shooting on an iPhone and then transferring the footage to an iPad for editing on a bigger screen is a pretty popular workflow. Android on the other hand had something called “WiFi Direct” relatively early in its career but it never got picked up consistently by phone makers, who preferred to introduce their own proprietary file transfer solutions which of course only worked with phones/devices of the same brand. So for quite a while I resorted to third party apps like Feem and Send Anywhere that also worked cross-platform between mobile and desktop – Android, iOS, macOS and Windows. As for Android-to-Android wireless file transfers, Google introduced an app called “Files Go” (today Files by Google) in late 2017 which was primarily a file explorer but could also share files offline to another device by creating a WiFi Direct connection. While the app came somewhat close to being a system resource in that it was pre-installed on many new phones as part of Google’s app portfolio, it was hard to deny that Apple’s AirDrop was more easily accessible.
Google is finally giving Android proper wireless file sharing
Enter Nearby Share: recently, Google started rolling out a new Android feature called “Nearby Share” that should soon be available on all Android devices running at least Android 6 Marshmallow (Android 6 was released in 2015; we’re now at Android 11). Nearby Share allows for fast wireless sharing of files to other nearby Android devices, even offline (that is, without using an internet connection). The feature is distributed automatically via the Google Play Services app (which comes pre-installed on basically all Android devices) so you don’t need to download anything. Nearby Share is integrated into the Android system; it’s not a separate app. As of now, roughly 90% of my own Android devices (and believe me, I own quite a few!) have already received Nearby Share.
Does your Android device have it already?
And here’s how to check if you have it: on your Android device, go into “Settings”, then select “Google” and then “Device connections”. You should now find an option called “Nearby Share” (not to be confused with something called “Nearby”!). To use it, you need to activate it by switching the slider to “On”. If you have not yet activated Location and Bluetooth, it will ask you to do so because that’s how it looks for and finds other devices. There are also a couple of options: you can customize the name of your device (the name under which it will be visible to other devices), you can select between three different “Device visibility” settings (All contacts, Some contacts, Hidden) and you can choose by which means the transfers are achieved (Data, Wi-Fi only or Without Internet). Regarding the last bit, I personally always switch to “Without Internet” so it uses the fast peer-to-peer WiFi Direct protocol and doesn’t consume any mobile data when not connected to regular WiFi. Before actually initiating the first file transfer I suggest one more thing (it’s not strictly necessary though): add Nearby Share to your Quick Settings. Quick Settings is the bunch of settings directly accessible when pulling down the notification shade from the top of the screen. It’s not exactly the same on all Android devices, but there’s usually a small pen icon in the Quick Settings which allows you to add or remove certain items. Scroll down to find two intertwined horizontal lines (Nearby Share) and drag the icon to the main Quick Settings. The reason I recommend doing this is that you can then easily make your device visible to others for Nearby Share or turn the feature on when it’s off. Long-pressing the Nearby Share icon will also take you straight into the settings for Nearby Share without clicking and scrolling through the general settings.
How does it work?
So how does a file transfer via Nearby Share actually work? Keep in mind that Nearby Share is for sharing to physically nearby devices, not to someone on the other side of the globe!
Assuming you want to transfer one or multiple video files, locate the file(s) in your phone’s Gallery app (the native Gallery app or Google Photos). Select the one(s) you need and then tap the share button.
Now look for the Nearby Share icon on the share sheet and select it. If you are using Google Photos as your Gallery app it will give you three options, select “Actual size”. Your sharing device will immediately start looking for devices that are close by and have Nearby Share activated (it usually doesn’t have to be opened).
On your receiving device you will get a prompt “Device nearby is sharing. Tap to become visible” (If it doesn’t, open Nearby Share from the Quick Settings on the receiving device). After doing so, your receiving device will pop up on the radar of the sharing device.
Select your receiving device and tap “Accept” on the receiving device itself. The file transfer will start and you are done. Your transferred files will be available in the “Download” folder of your Gallery app.
Is it any good?
So far, Nearby Share has worked really well for me and it makes transferring big files to other Android devices so much easier. It’s a bit of a shame that, unlike with phones, there aren’t many powerful Android tablets out there to make a phone-tablet workflow a tempting proposition – it’s basically only Samsung that offers a tablet with flagship specs for video editing these days. The biggest shortcoming for me though is that it’s currently only available between Android devices and doesn’t build a bridge to desktop/laptop computers or iOS. This isn’t exactly a surprise: while Apple produces both mobile and desktop/laptop hardware with its own software, Google doesn’t really. “Laptops” is debatable because Google has Chromebook devices like the Pixelbook / Pixelbook Go and Nearby Share is supposed to roll out for ChromeOS as well, but I would assume most of us still associate “laptop” with devices running Windows, Linux or macOS. There’s actual hope though: Google is apparently planning to make Nearby Share part of its Chrome browser, thereby opening up a whole new sharing world with the option to share to iOS, macOS, Windows and Linux. And even in its current state, Nearby Share can be very helpful in many situations, for instance when you have multiple phoneographers in the field and want to collect the footage on one device afterwards for editing, or when, as a journalist, you talk to a person who filmed something interesting on his/her phone and wants to share it with you.
Does your Android device have Nearby Share? Have you used it already? How does it work for you? Let me know in the comments or hit me up on Twitter @smartfilming. You might also want to have a look at Google’s own blog post about Nearby Share. If you like this blog, please consider subscribing to my Telegram Newsletter which will notify you when new posts are released.
After starting to write a blog post about multi-track audio editing apps on Android, I figured it might be useful to do one on field recorder apps first – as a precursor, so to speak. I chose the term “field recorder” as opposed to “audio recorder” since there’s a whole bunch of multi-track audio editing apps that also record audio. And while I’m mostly concerned with mobile videography on this blog, I think it can’t hurt to take a look at audio for once, particularly since field recorder apps can also be used as independent audio recorders with a lavalier mic in a video production environment. I’ll have a look at six different apps, each of which includes something interesting/useful. Which one qualifies as the best for you will depend on your use case and personal taste. Do note that most Android phones actually come with a native audio recording / voice memo app, some of which are quite good, but for the purpose of this article I will only look at 3rd party apps that are available for (almost) all Android devices. Well, with one exception…
RecForge II (Pro)
One of the first more advanced 3rd party audio recording apps I stumbled upon after getting a smartphone was RecForge (Pro). The UI was visually pleasant (somewhat futuristic) but not the most intuitive; I found navigating around slightly confusing in the beginning. Its successor RecForge II (Pro) got a new look which is less fancy and more focused, but the developer failed to iron out some of the UX issues I had with the app. Two examples: when you press the “Record” button on the main screen, the app takes you to an all-new recording screen with lots of different buttons, a timeline and big waveforms – and it is already recording. I think it would be less confusing if the same button that started the recording remained present and pushing it again stopped the recording. As a matter of fact I just found out that you can change this in the settings, but then you don’t get any kind of waveform or audio level meter, which is always good to have. When you stop the recording, you need to push a button that looks like an “eject” symbol to get back to the main screen, which I consider a bit odd. That being said, RecForge II might have the most complete feature set of all the recording apps listed here. It records in a wide variety of formats including wav, mp3 and m4a, has options for sample rates and bit rates, basic clip editing (missing a fade tool though!), live audio monitoring, gain control (positive and negative), live audio level meters (to check/adjust before recording; preview mode needs to be activated in the settings), support for external mics, mic source selection, scheduled recordings, homescreen widgets, a conversion tool and much more. The free version gives you unlimited wav recording but automatically pauses every three minutes when recording in any other format (like mp3). The Pro version without limitations and ads is 3.89€.
Easy Voice Recorder (Pro)
EVR is far and away the best audio recording app on Android when it comes to homescreen widgets, it has a whole variety of them, some minimal, some more elaborate. Just in case you don’t know: Widgets are a special feature of Android (iOS is currently playing catch-up) that lets you add certain app functionality directly to your home screen without having to open the app first. So for instance you can add a button that starts a recording directly to the homescreen. EVR is also the only audio recorder among my list that has a WearOS companion app which means you can launch and control a recording from a smartwatch. It has a range of useful features and options but it’s also missing some more advanced stuff: There’s currently no way to check audio levels or control gain before starting a recording and it’s also lacking the ability to do live monitoring via headphones. I have reached out to the developers and they acknowledged my request, saying that they will look into it but that significant changes to the app’s core would have to be made to provide this. If you like EVR but miss these features I strongly encourage you to contact the devs and make your voice heard! EVR lets you record in wav, m4a and 3gp formats in the free version, plus mp3 and aac in the paid upgrade. The paid upgrade also has more useful goodies in the form of a basic editing tool for trimming/cropping, the option to convert to other formats and automatic upload to the cloud. The paid pro version is 3.99€.
Voice Record Pro
This one’s a favorite of many on iOS and I’m glad that the developer decided to bring the app to Android as well. That being said, after launching it in 2018 and providing a few initial bug fixes, the developer hasn’t delivered a single update (be it bug fixes, let alone a feature drop) in over two years. It works reasonably well on most devices but certain (device-specific) glitches have not been addressed, with the developer not being available for any kind of communication (I have tried on multiple occasions to no avail). It also lacks the transcription feature and the ability to adjust input gain of the iOS version, if that’s important to you. VRP is unique among the apps mentioned here in that it allows you to create an mp4 video from a recorded audio file by adding an image and text to it – useful for a quick share/teaser on social media platforms. The app has a great set of options for adjusting the quality of the recording, supports external mics and lets you check the input levels before and during a recording – no live monitoring via headphones though. A basic editing tool for trimming/cropping is included. VRP is free with ads. According to the Google Play Store information, there’s supposed to be an in-app purchase but I have honestly not been able to locate it. I would be happy to pay a few bucks for this app and get rid of the ads but apparently that’s not possible (do let me know if you have found the IAP!). It’s a potentially great app but I wish the developer would make an effort to keep the Android version up to date.
ShurePlus MOTIV
I have to admit this one has possibly become my personal favorite for its clean and focused design/functionality, great basic editing tools and solid feature set – not least its integration with a range of Shure microphones (it naturally supports external mics in general, not only Shure mics, if you’re worried about that). It’s also completely free without any ads or feature cutbacks. Something I absolutely love about the app is the way you can easily apply fades at the beginning and end of a clip: just drag the handles – you can even mirror the fades automatically! The app records in wav format with the option to convert to aac afterwards. You can adjust positive gain before/during a recording; reducing the input level is only possible if you are using some kind of external interface however. The biggest shortcoming at the moment is the lack of an option for live audio monitoring via headphones (which is available in the iOS version of the app). I have been in touch with Shure and they are looking into it. It would also be nice to have one or two homescreen widgets for people who often use it and want to launch a recording as fast as possible, but that’s a minor complaint. All in all, this is a beautiful and excellent audio recording app from a renowned microphone manufacturer – do check it out!
Field Recorder
If you are used to dedicated portable field recorders, you might find Field Recorder’s UI and functionality particularly appealing since it sort of mimics the appearance of such devices. Others however could be a bit intimidated by the somewhat busy upper half of the UI and the load of options in the settings menu. One very cool thing about FR is that it lets you rotate the UI, which in the case of reverse portrait mode helps if you are using the phone’s (main) internal mic (instead of an external mic), as it is usually located at the bottom of the phone. If you are interviewing someone and pointing this part towards the subject, the UI would be topsy-turvy for yourself unless you can rotate the UI independently from the device’s orientation. FR has you covered here. The app has an extensive range of options to customize the interface/recording process, includes live audio monitoring via headphones, supports the use of external mics and features an optional limiter. It’s missing the ability to edit/trim a recording though. FR records uncompressed wav files with the option to convert to mp3 after installing another app (‘Media Converter’) from the Play Store to handle the conversion. There’s a homescreen widget but it’s a bit complicated to use. Field Recorder costs 4.99€; there’s no free version but I’d say it’s most definitely worth the price if you like its UI and feature set.
Google Recorder
This one is probably the oddball among the pack with very little to no control/settings options – but it sports a killer feature that by itself will make many folks crave it badly: it can auto-transcribe any recording offline (only English so far!) and search text within a recording, completely for free! When sharing, you have the option to share only the audio, only the text as a text file, or both. You can also directly upload recordings to the cloud (Google Drive). Recordings are saved in m4a format with a sample rate of 32 kHz and a bitrate of 48 kbit/s. There’s currently no option for higher sample or bit rates or other recording formats like wav. But depending on what you are doing, this might not be a problem. There’s one relatively big catch: so far, it’s officially only available for Google’s Pixel devices (excluding the very first Pixel phone apparently). You can however sideload it (meaning you install it from outside the Google Play Store via an apk file) to many other Android devices; XDA Developers has a great article on how to do that and which devices are currently supported. I sideloaded it to my LG V30 and it works really well. Note: you will need to allow app installs from external sources first in the settings of your phone (it’s disabled by default for security reasons). Will it be officially available for non-Google Android devices in the future? There are arguments for both sides: technically it shouldn’t be a problem since Google’s Live Transcribe app, which basically taps into the same core functionality of transcribing audio, is already available for many Android devices. Google might however want to keep this a special feature of Pixel devices, an incentive to pick a Pixel over other Android phones. We’ll see how that plays out over the next months.
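To put that 48 kbit/s bitrate into perspective, a quick back-of-the-envelope calculation shows how light these recordings are on storage (pure arithmetic, container overhead ignored; the comparison figure for uncompressed wav is my own illustration):

```python
# Rough storage footprint of a compressed audio recording:
# bitrate in kbit/s -> megabytes per hour (container overhead ignored).
def megabytes_per_hour(bitrate_kbps):
    bytes_per_second = bitrate_kbps * 1000 / 8
    return bytes_per_second * 3600 / 1_000_000

print(megabytes_per_hour(48))    # ~21.6 MB for an hour of audio

# Compare with uncompressed 16-bit mono wav at 44.1 kHz:
print(44100 * 2 * 3600 / 1_000_000)  # ~317.5 MB per hour
```

So even multi-hour interviews barely dent your storage, which fits the app’s transcription-first use case.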
Some things Google Recorder is missing: while there’s a live waveform when recording (which is good), you don’t get an audio level meter, gain control, homescreen widgets or the ability to edit/trim a recording. Like all the other apps listed here, it generally supports the use of external mics.
So which one is the best field recorder app for Android? Well, as indicated in the introduction to this article, there’s no clear answer. There are many very good ones and which one suits you best will depend on your use case, which features you absolutely need and which you can live without, and whether you love a complex interface with loads of options or like to keep it simple. The good thing is: with the exception of Field Recorder (which doesn’t have a free version) and Google Recorder (which has only limited availability), you will be able to test most of the apps for free to decide which one’s your top pick. And also remember: these are just a couple of candidates that I happen to like; there are many, many more in the Google Play Store and it’s entirely possible that there’s a great one I haven’t discovered yet. If you have a favorite not listed here, do let me know in the comments or on Twitter @smartfilming. And stay tuned for an upcoming article about multi-track audio editing apps for Android. Last thing: if you like this blog, consider signing up for my Telegram newsletter via t.me/smartfilming to get notified about new posts.
One of the things more tech-savvy smartphone users often criticize about Google’s mobile operating system Android is the fact that new versions of the OS roll out relatively slowly and only to a somewhat limited number of (recent) devices, particularly when compared to new versions of Apple’s iOS for iPhones. There has been some progress (the current version Android 10 managed the fastest and widest roll-out of any Android version so far), but it’s still a long way from the swift and widespread roll-out of new iOS versions.
While in general I would definitely prefer to have faster and more wide-reaching availability of new Android versions, I also think that the topic is often way too dramatized, particularly since Google separated regular security patches from the OS version with Android 8 Oreo in 2017. If we look at this particularly from a smartphone videography perspective, there have been hardly any major feature updates to the Android system over the last years that would make having “the latest and greatest” an absolute must. In my opinion, the last crucial milestone was Android 5 Lollipop back in 2014 when Google added the ability for screen recording via third party apps and – most importantly – introduced the Camera2 API which gave developers access to more advanced camera controls like shutter speed and ISO. The following versions surely continued to further polish a now pretty mature mobile operating system and occasionally included generally useful new tweaks and features for the common user but nothing really groundbreaking in terms of mobile videography. The upcoming Android 11 (scheduled for late summer / early fall 2020) could actually be a new milestone however. After checking out the official Android 11 developer information site from Google and various articles (many by the excellent XDA Developers news outlet!) plus getting a (used) Pixel 3 to hop on the beta version of Android 11 myself, I have found a bunch of quite interesting things, some will be immediately accessible in Android 11, others will offer new possibilities for app developers to dig into.
Native Screen Recording
As mentioned before, Android 5 had already introduced the general ability for screen recording back in 2014, but only for 3rd party apps, not as a native OS functionality. While some Android phone makers actually added native screen recording to their phones, it wasn’t available right out of the box for most devices. It did finally pop up as a built-in system feature in the beta version of Android 10 but was unfortunately dropped for the final release. Now it’s back in the Android 11 beta and I’m pretty sure it will make it to the finish line this time around! You can simply access this feature via the quick settings when pulling down the notification shade from the top. It’s not there by default, but you can easily add it to the quick settings by tapping on the pen icon in the bottom left corner of the notification shade and then dragging the screen record tile to the quick settings. On my Pixel 3, the resolution of the recorded video is 2160×1080 or 1080×2160 depending on the orientation, with a somewhat curious frame rate hovering around 40 to 45 fps.
Capturing System Audio
Directly related to the native screen recording is the ability to capture system/internal audio from the device. It’s something that Google wouldn’t allow up until now, so all the screen recording apps that came out in the wake of Android 5 could only capture sound through the phone’s mic / an external mic – or no sound at all – not the ‘clean’ audio of an incoming call or a video you are playing back. When you launch the native screen recorder on Android 11, it asks you to pick between three options for audio capture: “Microphone”, “Device audio” or “Device audio and microphone”. Why is this important? If you want to record a (video) call for instance, you should now be able to capture both ends directly into a mix, or just get your interviewee’s audio without having your own side mixed in. The pop-up when launching the screen recorder also gives you the option to show touches while capturing, which is great if you are doing a tutorial on how to use an app, as viewers can see what buttons you touch during the process.
Airplane Mode doesn’t turn off Bluetooth
When recording video on a smartphone, it’s generally a good idea to turn on Airplane Mode to prevent any kind of interference with your recording. Sure, most of the time you might get away with not paying attention to this… until an important shot gets ruined by an incoming call. So far, going into Airplane Mode killed Bluetooth (it’s possible to manually turn it on again) which probably isn’t that big of a deal for shooting video – yet. Most external Bluetooth mics are still lacking in terms of more professional audio quality but this might change soon, and it’s already a viable option to use Bluetooth headphones for audio monitoring. It’s a welcome tweak then that, with a Bluetooth device paired to the phone, going into Airplane Mode will no longer turn off Bluetooth automatically.
Automatically block notifications when using the camera
Filmic Pro actually already has an option in its settings to block notifications while using the app, but Google apparently introduced a new API that will allow developers of camera apps to automatically block disruptive notifications and sounds while people are using the app. The next step could be a feature that automatically activates airplane mode when launching a camera app.
Support for concurrent use of more than one camera
This one could be a biggie! Here’s a quote from Google’s official Android 11 “Features and API Overview” knowledge base: “Support for concurrent use of more than one camera. Android 11 adds APIs to query support for using more than one camera at a time, including both a front-facing and rear-facing camera.” To me, this very much sounds like the groundwork for giving camera apps the power to capture content from multiple cameras simultaneously. This is not completely new on Android phones: various phone makers including the likes of Samsung, HTC, LG and Nokia have featured camera modes on some of their devices that let you capture a video with both the front and the rear camera at the same time, creating a split-screen video in the process. I actually wrote a whole article about it and its particular usefulness for covering live events with some sort of presenter. Whether people didn’t like the feature or didn’t even know it existed in the first place will probably remain in the dark (I assume it was the latter), but the fact remains that this very intriguing feature never gained any significant popularity or widespread availability. The universal rise of multi-camera arrays on smartphones in recent years however really does call for a revival of this feature! Pretty much every phone nowadays has two or even more rear cameras and one could indeed think of quite a few use cases where a combination of rear and front cameras or both rear cameras (regular and wide-angle/tele) recording simultaneously might come in handy. Apple introduced a dedicated API with iOS 13 just last year and 3rd party developers jumped at the opportunity, with Filmic Inc.’s CTO Christopher Cohen even being invited on stage at the Apple Event to show off “DoubleTake”. Unlike with the dual camera feature on certain Android devices before, you can also record the video streams into separate files instead of getting a pre-mixed split-screen.
It’s easy to see that this resource-intensive functionality would most likely only be available on powerful Android devices in the beginning (it even seems to be relatively fragmented on iOS at this time) but I really hope I’m not misinterpreting this info and some camera app developer can make it happen soon!
Control external devices
I’m not sure how much can actually come out of this but a new feature called “Quick Access Device Controls” specifically includes “cameras” in its explanatory text: “The Quick Access Device Controls feature, available starting in Android 11, allows the user to quickly view and control external devices such as lights, thermostats and cameras from the Android power menu”. From this, one might deduce that by “cameras” they probably refer to surveillance cameras (or some other internet-connected IoT smart device), but I suppose this could potentially be utilized for controlling other external devices in a media production environment as well, so I’ll keep an eye on it – maybe a clever developer will find an ingenious application for this.
Removal of 4GB file size limit
Up until now, Android was only able to write maximum file sizes of around 4GB, a leftover from the very early days that remained unaddressed for too long. As a matter of fact, certain phone makers (Sony for instance) found a way to disable the file size limit in their version of the OS, but it remained present on many devices. While this limitation was of little relevance to many (including certain mobile videographers!), it was a major nuisance for others (including me) who wanted to record longer interviews, workshops, events etc. Some camera apps would seemingly record continuously while splitting clips in the background when reaching the file size limit, some would automatically restart the recording, others would just stop, forcing a manual restart by the user. With UHD/4K video slowly creeping into the mainstream, this matter has become even more pressing in recent years, and it’s really about time Android rid itself of this anachronistic relic. Well, it looks like this time is now!
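The split-in-the-background behavior described above boils down to simple bookkeeping: stop each file just short of the 4 GiB boundary and start the next one. A minimal Python sketch of that logic (the function name and bitrate figures are illustrative, not taken from any real camera app):

```python
# Sketch: plan how a recorder might roll over to a new segment file
# before crossing the 4 GiB limit (a relic of 32-bit size fields,
# e.g. on FAT32-formatted SD cards).

FOUR_GIB = 4 * 1024**3  # 4,294,967,296 bytes

def plan_segments(total_bytes, limit=FOUR_GIB):
    """Return the sizes of the segment files a recorder would write."""
    segments = []
    remaining = total_bytes
    while remaining > 0:
        chunk = min(remaining, limit - 1)  # stay strictly below the limit
        segments.append(chunk)
        remaining -= chunk
    return segments

# A 90-minute recording at ~100 Mbit/s is 67.5 GB of data:
sizes = plan_segments(90 * 60 * 100_000_000 // 8)
print(len(sizes), "segment files")  # → 16 segment files
```

With the limit gone, that entire dance (and the risk of dropped frames at each rollover) simply disappears.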
Share Nearby / Nearby Sharing
The last feature I want to mention isn’t actually exclusive to the upcoming new Android version, but I still decided to include it here. AirDrop has been a really useful feature on iOS for some time: it allows you to wirelessly transfer (big) files between iOS, iPadOS and macOS devices without an internet connection. While Google launched its quite useful “Files” app some time ago, which, among other things, lets you quickly send (big) files between Android devices without an active internet connection by using an ad-hoc wifi network and the Wi-Fi Direct protocol, it’s still a separate app and not baked into the OS itself. It also doesn’t bridge the gap to the desktop if you want to send one or more video files from your phone to your computer for editing. A new feature called “Share Nearby” or “Nearby Sharing”, which will be integrated into Android’s share sheet, apparently aims to provide Android users with an AirDrop-like experience. And while I first thought that it would not reach beyond the Android OS, thereby seriously curtailing its usefulness, there is some information indicating it could actually link to desktop computers via the Google Chrome browser, which would be really awesome! Share Nearby is supposed to roll out in August for all Android devices running Android 6 Marshmallow or newer.
As you can see, this time around there’s actually quite a list of (potentially) useful new features debuting with the new version of Android so it’s fair to say I’m really excited about the launch! What do you think? Let me know in the comments or hit me up on the Twitter @smartfilming. Also, feel free to sign up for my Telegram newsletter t.me/smartfilming to get notified about new blog posts.
As I pointed out in one of my very first blog posts here (in German), smartphone videography still comes with a whole bunch of limitations (although some of them are slowly but surely going away or have at least been mitigated). Yet one central aspect of the fascinating philosophy behind phoneography (that’s the term I now prefer for referring to content creation with smartphones in general) has always been one of “can do” instead of “can’t do” despite the shortcomings. The spirit of overcoming obvious obstacles, going the extra mile to get something done, trailblazing new forms of storytelling despite not having all the bells and whistles of a whole multi-device or multi-person production environment seems to be a key factor. With this in mind I always found it a bit irritating and slightly “treacherous” to this philosophy when people proclaimed that video editing apps without the ability to have a second video track in the editing timeline are not suitable for storytelling. “YOU HAVE TO HAVE A VIDEO EDITOR WITH AT LEAST TWO VIDEO TRACKS!” Bam! If you are just starting out creating your first videos you might easily be discouraged if you hear such a statement from a seasoned video producer. Now let me just make one thing clear before digging a little deeper: I’m not saying having two (or multiple) video tracks in a video editing app as opposed to just one isn’t useful. It most definitely is. It enables you to do things you can’t or can’t easily do otherwise. However, and I can’t stress this enough, it is by no means a prerequisite for phoneography storytelling – in my very humble opinion, that is.
I can see why someone would support the idea of two video tracks being a must for creating certain types of videography work. For instance, it could be based on the traditional concept of a news report or documentary featuring one or more persons talking (most often as part of an interview), where you don’t want the talking person to occupy the frame all the time but still want to keep the statement going. This can help in many ways: on a very basic level, it works as a means of visual variety to reduce the amount of “talking heads” air time. It might also help to cover up some unwanted visual distraction, like when another person stops to look at the interviewee or the camera. But it can also exemplify something the person is talking about, creating a meaningful connection. If you are interviewing the director of a theater piece who talks about the upcoming premiere, you could insert a short clip showing the theater building from the outside, a clip of a poster announcing the premiere or a clip of actors playing a scene during rehearsal while the director is still talking. The way you do it is by adding the so-called “b-roll” clip as a layer on top of the primary clip in the timeline of the editing app (usually muting the audio of the b-roll or at least reducing its volume). Without a second video track it can be difficult or even impossible to pull off this mix of video from one clip with the audio from another. But let’s stop here for a moment: is this really the ONLY legitimate way to tell a story? Sure, as I just pointed out, it does have merit and can be a helpful tool – but I strongly believe that it’s also possible to tell a good story without this “trick” – and therefore without the need for a second video track. Here are some ideas:
Most of us have probably come across the strange acronym WYSIWYG: “What you see is what you get” – it’s a concept from UI design where it means that the preview you are getting in a (text/website/CMS) editor closely resembles the way things will actually look after creating/publishing. If you mark a word as bold in the editor and it immediately appears bold, that’s WYSIWYG. If you have to punch in code like <b>bold</b> into your text editing interface to make the published end result bold, that’s not WYSIWYG. So I dare to steal this bizarre acronym in a slightly altered version and context: WYSIWYH – “What you see is what you hear” – meaning that your video clips always keep their original sound. So in the case of an interview like the one described before, using a video editing app with only one video track, you would either present the interview in one piece (if it’s not very long) or cut it into smaller chunks with “b-roll” footage in between rather than overlaid (if you don’t want the questions included). Sure, it will look or feel a bit different, not “traditional”, but is that bad? Can’t it still be a good video story? One fairly technical problem we might encounter here is getting smooth audio transitions between clips when the audio levels of the two clips are very different. Video editing apps usually don’t have audio-only cross-fades (WHY is that, I ask!), and a cross-fade involving both audio AND video might not be the transition of choice, as most of the time you want to use a plain cut. There are ways to work around this, however, or you can just accept it as a stylistic choice for this way of storytelling.
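For what it’s worth, the audio-only cross-fade that so many editing apps lack is conceptually trivial: over the transition window, one clip’s volume ramps down while the other’s ramps up, and the two are summed. A minimal sketch with plain Python lists standing in for audio samples (no real editing app exposes exactly this interface – this just illustrates the math):

```python
def crossfade(a, b, overlap):
    """Linearly cross-fade the last `overlap` samples of clip `a`
    into the first `overlap` samples of clip `b`."""
    assert overlap <= len(a) and overlap <= len(b)
    faded = []
    for i in range(overlap):
        t = i / overlap  # ramps 0.0 → 1.0 across the transition
        faded.append(a[len(a) - overlap + i] * (1 - t) + b[i] * t)
    # unchanged head of a, blended middle, unchanged tail of b
    return a[:len(a) - overlap] + faded + b[overlap:]

# Two "clips" at very different levels, blended over a 4-sample overlap:
mixed = crossfade([1.0] * 8, [0.2] * 8, 4)
print(len(mixed))  # 8 + 8 - 4 = 12 samples
```

The loud clip steps down through 1.0, 0.8, 0.6, 0.4 into the quiet one instead of jumping straight from 1.0 to 0.2 – exactly the smoothing you’d want between two clips with mismatched ambient levels.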
Another very interesting approach that results in a much easier edit without the need for a second video track (if any editing at all) but requires more pre-planning before the shoot is the one-shot approach. In contrast to what many one-man-band video journalists do (using a tripod with a static camera), this means you need to be an active camera operator at the same time to catch different visual aspects of the scene. This probably also calls for some sort of stabilization solution like phone-internal OIS/EIS, a rig, a gimbal or at least a steady hand and some practice. Journalist Kai Rüsberg has been an advocate of this style and collected some good tips here (the blog post is in German but Google Translate should help you get the gist). As a matter of fact, there’s even a small selection of notable feature films created in such a (risky) manner, among them “Russian Ark” (2002) and “Victoria” (2015). One other thing we need to take into consideration is that if there’s any kind of question-asking involved, the interviewer’s voice will be “on air”, so the audio should be good enough for this as well. I personally think that this style can be (if done right!) quite fascinating and more visually immersive than an edited package of static separate shots, but it poses some challenges and might not be suited for everybody and every job/situation. Still, trying something like this might just expand your storytelling capabilities. A one-track video editing app will suffice to add some text, titles, narration, fade in/out etc.
A unique amalgam of the traditional multi-clip approach and the one-shot method is a technique I called “shediting” in an earlier blog post. It involves a feature that is present in many native and some 3rd party camera apps: by pausing the recording instead of stopping it in between shots, you can cram a whole bunch of different shots into a single clip. Just like the one-shot approach, this can save you lots of time in the edit (sometimes things need to go really fast!) but requires more elaborate planning and comes with a certain risk. It also usually means that everything needs to be filmed within a very compact time frame and in one location/area, because in most cases you can’t close the app or let the phone go to sleep without actually stopping the recording. Nonetheless, I find this to be an extremely underrated and widely unknown “hack” for piecing together a package on the go! Do yourself a favor and try to tell a short video story that way!
A way to tackle rough audio transitions (or bad/challenging sound in general) while also creating a sense of continuity between clips is to use a voice-over narration in post-production. Most mobile editors offer this option directly within the app, and even if you happen to come across one that doesn’t (or, like Videoshop, hides it behind a paywall), you can easily record a voice-over in a separate audio recording app and import the audio into your video editor – although it’s a bit more of a hassle if you need to redo it when the timing isn’t quite right. One example could be splicing your interview into several clips in the timeline and adding “b-roll” footage with a voice-over in between. Of course you should see to it that the voice-over is somewhat meaningful, not just redundant information, and that it doesn’t give away the gist / key argument of an upcoming statement by the interviewee. You could, however, build/rephrase an actual question into the voice-over. Instead of having the original question “What challenges did you experience during the rehearsal process?” in the footage, you record a voice-over saying “During the rehearsal process director XY faced several challenges both on and off the stage…” for the insert clip, followed by the director’s answer to the question. It might also help in such a situation to let the voice-over begin at the end of the previous clip and flow into the subsequent one to cover up an obvious change in the ambient sound between the different clips. Of course, depending on the footage, the story and the situation, this might not always work perfectly.
Finally, with more and more media content being consumed muted on smartphones “on the go” in public, one can also think about using text and titles as an important narrative tool, particularly if there’s no interview involved (of course a subtitled interview would also be just fine!). This only works, however, if your editing app has an adequate title tool – nothing too fancy, but at least covering the basics like control over fonts, size, position, color etc. (looking at you, iMovie for iOS!). Unlike a second video track, titles don’t tax the processor very much, so even ultra-budget phones will be able to handle them.
Now, do you still remember the second part of this article’s title, the one in parentheses? I have just gone to great lengths to explain why I think it’s not always necessary to use a video editing app with at least two video tracks to create a video story with your phone, so why would I now be saying that after all it doesn’t really matter that much anymore? Well, if you look back a whole bunch of years (say around 2013/2014), when the phoneography movement really started to gather momentum, the idea of having two video tracks in a video editing app was not only a theoretical question for app developers thinking about how advanced they WANTED their app to be. It was also very much a plain technical consideration, particularly on Android, where the processing power of devices ranged from quite weak to quite powerful. Processing multiple video streams in HD resolution simultaneously was no small feat at the time for a mobile processor; to a small degree this might even still be true today. This meant that not only was there a (very) limited selection of video editing apps with the ability to handle more than just one video track at the same time, but even when an app like KineMaster or PowerDirector generally supported the use of multiple video tracks, this feature was only available on certain devices, excluding phones and tablets with very basic processors that weren’t up to the task. Now this has very much changed over the last years with SoCs (systems-on-a-chip) becoming more and more powerful, at least when it comes to handling video footage in FHD 1080p resolution as opposed to UHD/4K! Sure, I bet there’s still a handful of (old) budget Android devices out there that can’t handle two tracks of HD video in an editing app, but for the most part, the ability to use at least two video tracks is no longer tied to technical constraints – if app developers want their app to have multi-track editing, they should be able to integrate it.
And you can definitely see that an increasing number of video editing apps have (added) this feature – one that’s really good, cross-platform and free without watermark is VN which I wrote about in an earlier article.
So, despite having argued that two video tracks in an editing app is not an absolute prerequisite for producing a good video story on your phone, the fact that nowadays many apps and basically all devices support this feature very much reduces the potential conflict that could arise from such an opinion. I do hope however that the mindset of the phoneography movement continues to be one of “can do” instead of “can’t do”, exploring new ways of storytelling, not just producing traditional formats with new “non-traditional” devices.
As usual, feel free to drop a comment or get in touch on the Twitter @smartfilming. If you like this blog, consider signing up for my Telegram channel t.me/smartfilming.