There are times when – for reasons of privacy or even a person's physical safety – you want to make certain parts of a frame in a video unrecognizable so as not to give away someone's identity or the place where you shot the video. While it's fairly easy to achieve something like that for a photograph, it's a lot more challenging for video, for two reasons: 1) A person moving around within a shot, or a moving camera, constantly alters the location of the subject within the frame. 2) If the person talks, they might also be identifiable just by their voice. So are there any apps that help you anonymize persons or objects in videos when working on a smartphone?
KineMaster – the best so far
Up until recently, the best app for anonymizing persons and/or certain parts of a video in general was KineMaster, which I already praised in my last blog post about the best video editing apps on Android (it's also available for iPhone/iPad). While it's possible to use just about any video editor that allows for a resizable image layer (say, a plain black square or rectangle) on top of the main track to cover a face, KineMaster is the only one with a dedicated blur/mosaic tool for this use case. Many other video editing apps have a blur effect in their repertoire, but the problem is that this effect always affects the whole image and can't be applied to only part of the frame. KineMaster, on the other hand, allows its Gaussian Blur effect to be adjusted in size and position within the frame. To access this feature, scroll to the part of the timeline where you want to apply the effect but don't select any of the clips! Now tap on the "Layer" button, choose "Effect", then "Basic Effects", then either "Gaussian Blur" or "Mosaic". An effect layer gets added to the timeline which you can resize and position within the preview window. Even better: KineMaster also lets you keyframe this layer, which is incredibly important if the subject/object you want to anonymize is moving around the frame or if the camera is moving (thereby constantly altering the subject's/object's position within the frame). Keyframing means you can set "waypoints" for the effect's area so that it automatically changes its position/size over time. You can access the keyframing feature by tapping on the key icon in the left sidebar. Keyframes have to be set manually, so it's a bit of work, particularly if your subject/object is moving a lot. If you just have a static shot with the person not moving around much, you don't have to bother with keyframing though.
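To make the keyframing idea a bit more concrete, here's a minimal sketch of how an editor interpolates an effect region between two waypoints. This is purely illustrative – KineMaster's actual interpolation/easing isn't documented, so simple linear interpolation is assumed:

```python
# Illustrative sketch of keyframe interpolation for an effect region.
# Assumption: plain linear interpolation between waypoints (KineMaster's
# real easing behaviour is not documented).

def interpolate_keyframes(keyframes, t):
    """keyframes: time-sorted list of (time, (x, y, w, h)) waypoints.
    Returns the blur-region rectangle at time t."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, r0), (t1, r1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)  # how far we are between the waypoints
            return tuple(a + f * (b - a) for a, b in zip(r0, r1))

# Two waypoints: the region drifts right and grows between t=0s and t=2s.
kf = [(0.0, (100, 50, 80, 80)), (2.0, (300, 50, 120, 120))]
print(interpolate_keyframes(kf, 1.0))  # halfway: (200.0, 50.0, 100.0, 100.0)
```

The upshot: you only set a handful of waypoints by hand, and the software fills in every frame in between – which is exactly why keyframing is so much less work than adjusting the blur region frame by frame.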
And as if the adjustable blur/mosaic effect and support for keyframing weren't good enough, KineMaster also gives you a tool to add an extra layer of privacy: you can alter voices. To access this feature, select a clip in the timeline and then scroll down the menu on the right to find "Voice Changer" – there's a whole bunch of different effects. To be honest, most of them are rather cartoonish – I'm not sure you want your interviewee to sound like a chipmunk. But there are also a couple of voice changer effects that I think can be used in a professional context.
What happened to Censr?
As I indicated in the paragraph above, a moving subject (or a moving camera) makes anonymizing content within a video a lot harder. You can manually keyframe the blurred area to follow along in KineMaster, but it would be much easier if that could be done via automatic tracking. Last summer, a closed beta version of an app called "Censr" was released on iOS; the app was able to automatically track and blur faces. It all looked quite promising (I saw some examples on Twitter), but the developer Sam Loeschen told me that "unfortunately, development on censr has for the most part stopped".
PutMask – a new app with a killer feature!
But you know what? There actually is a smartphone app out there that can automatically track and pixelate faces in a video: it's called PutMask and it's currently only available for Android (there are plans for an iOS version). The app (released in July 2020) offers three ways of pixelating faces in videos: automatically by face-tracking, manually by following the subject with your finger on the touchscreen, and manually by keyframing. The keyframing option is the most cumbersome one but might be necessary when the other two don't work well. The "swipe follow" option is the middle ground: not as time-consuming as keyframing, but manual action is still required. The most convenient approach is of course automatic face-tracking (you can even track multiple faces at the same time!) – and I have to say that in my tests, it worked surprisingly well!
Does it always work? No, there are definitely situations in which the feature struggles. If you are walking around and your face gets covered by something else (for instance because you are passing another person or an object like a tree) even for only a short moment, the tracking often loses you. It even lost me when I was walking around indoors and the lens flare from the light bulb on the ceiling created a visual "barrier" which I passed at some point. And although I would say that the app is generally well-designed, some of the workflow steps and the nomenclature can be a bit confusing. Here's an example: after choosing a video from your gallery, you can tap on "Detect Faces" to start a scanning process. The app will tell you how many faces it has found and will display a numbered square around each face. If you now tap on "Start Tracking", the app tells you "At least select One filter". But I couldn't find a button or anything else indicating a "filter". After some confusion I discovered that you need to tap once on the square that is placed over the face in the image – maybe by "filter" they actually mean you need to select at least one face? Now you can initiate the tracking. After the process is finished you can preview the tracking that the app has done (and also dig deeper into the options to alter the amount of pixelation etc.), but to check the actual pixelated video you have to export your project first. While the navigation could/should be improved for certain actions to make it more clear and intuitive, I was quite happy with the results in general. The biggest catch until recently was the maximum export resolution of 720p, but with the latest update, released on 21 January 2021, 1080p is also supported. An additional feature that would be great to have in an app with a dedicated focus on privacy and anonymization is the ability to alter/distort the voice of a person, like you can in KineMaster.
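For anyone curious what "pixelating" a tracked face actually does under the hood, here's a toy sketch of the mosaic operation on a tiny grayscale frame. It's an assumption-laden illustration (real apps like PutMask work per colour channel on full video frames, driven by the tracked face rectangle), not PutMask's actual code:

```python
# Toy sketch of mosaic/pixelation: replace each NxN block inside a target
# region with the block's average value. An app does this on every frame,
# with (x, y, w, h) supplied by the face tracker.

def pixelate(frame, x, y, w, h, block=2):
    """frame: 2D list of grayscale values; (x, y, w, h): region to censor."""
    out = [row[:] for row in frame]  # keep the original frame untouched
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            cells = [(r, c)
                     for r in range(by, min(by + block, y + h))
                     for c in range(bx, min(bx + block, x + w))]
            avg = sum(frame[r][c] for r, c in cells) // len(cells)
            for r, c in cells:
                out[r][c] = avg  # whole block collapses to one value
    return out

frame = [[0, 10, 20, 30],
         [40, 50, 60, 70],
         [80, 90, 100, 110],
         [120, 130, 140, 150]]
# Pixelate the top-left 2x2 region: its four pixels all become their
# average, 25, while the rest of the frame stays unchanged.
print(pixelate(frame, 0, 0, 2, 2))
```

Scaling this up to video just means running the same operation on every frame with the rectangle coming from the tracker (or from your keyframes), which is why losing the track for even a moment immediately exposes the face.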
There's one last thing I should address: the app is free to download with all its core functionality, but you only get SD resolution and a watermark on export. For 720p watermark-free export, you need to make an in-app purchase. The IAP procedure is without a doubt the weirdest I have ever encountered: the app tells you to purchase any one of a selection of different "characters" to receive the additional benefits. Initially, these "characters" are just names in boxes – "Simple Man", "Happy Man", "Metal-Head" etc. If you tap on a box, an animated character pops up. But only when scrolling down does it become clear that these "characters" represent different amounts of payment with which you support the developer. And if that wasn't strange enough by itself, the amount you can donate goes up to a staggering 349.99 USD (the character "Dr. Plague") – no kidding! At first, I had actually selected Dr. Plague because I thought it was the coolest-looking character of the bunch. Only when trying to go through with the IAP did I become aware of the fact that I was about to drop 350 bucks on the app! Seriously, this is nuts! I told the developer that I don't think this is a good idea. Anyway, the amount of money you donate doesn't affect your additional benefits, so you can just opt for the first character, the "Simple Man", which costs you €4.69. I'm not sure why they would want to make things so confusing for users willing to pay, but other than that, PutMask is a great new app with a lot of potential – I will definitely keep an eye on it!
As always, if you have questions or comments, drop them below or hit me up on Twitter @smartfilming. If you like this article, also consider subscribing to my Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter about important things that happened in the world of mobile video.
A couple of years ago, 360° (video) cameras burst onto the scene and seemed to be all the rage for a while. The initial excitement faded relatively quickly, however, when producers realized that this kind of video didn't resonate with the public as much as they had thought it would – at least in the form of immersive VR (Virtual Reality) content, for which you need extra hardware, hardware that most people didn't bother to get or didn't get hooked on. From a creator's side, 360 video also involved some extra and – dare I say – fairly tedious workflow steps to deliver the final product (I have one word for you: stitching). That's not to say that this extraordinary form of video doesn't have value or vanished into total obscurity – it just didn't become a mainstream trend.
Among the companies that heavily invested in 360 cameras was Shenzhen-based Insta360. They offered a wide variety of different devices: some standalone, some meant to be physically connected to smartphones. I actually got the Insta360 Air for Android devices, and while it was not a bad product at all and fun for a short while, the process of connecting it to the USB port of the phone when using it, then taking it off again when putting the phone back in your pocket or using it for other things, quickly sucked out the motivation to keep using it.
Repurposing 360 video
While continuing to develop new 360 cameras, Insta360 realized that 360 video could be utilized for something other than just regular 360 spherical video: overcapture and subsequent reframing for "traditional", "flat" video. What does this mean in plain English? Well, the original spherical video that is captured is much bigger in terms of resolution/size than the one you want as a final product (for instance classic 1920×1080), which gives you the freedom to choose your angle and perspective in post-production and even create virtual camera movement and other cool effects. Insta360 by no means invented this idea, but they were clever enough to shift their focus towards this use case. Add to that the marketing-gold feature of the "invisible selfie-stick" (taking advantage of a dual-lens 360 camera's blind spot between its lenses), brilliant "Flow State" stabilization and a powerful mobile app (Android & iOS) full of tricks, and you end up with a significant popularity boost for your products!
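The reframing idea becomes more tangible with the projection math behind it: a chosen viewing direction simply indexes into the big equirectangular source frame. The sketch below shows only the direction-to-pixel mapping (a full reframe renders an entire perspective crop this way, per output pixel); these are the standard equirectangular projection formulas, not Insta360's actual implementation:

```python
# Sketch of the mapping at the heart of 360 reframing: a viewing direction
# picks a spot in the equirectangular source frame. Standard equirectangular
# projection formulas – not Insta360's actual code.

def equirect_pixel(yaw_deg, pitch_deg, width, height):
    """Map a view direction to (x, y) in an equirectangular image.
    yaw: -180..180 (0 = straight ahead), pitch: -90..90 (0 = horizon)."""
    x = (yaw_deg + 180.0) / 360.0 * (width - 1)
    y = (90.0 - pitch_deg) / 180.0 * (height - 1)
    return (x, y)

# On a 5760x2880 sphere (the One X2's 5.7k resolution), looking straight
# ahead lands dead centre of the source frame:
print(equirect_pixel(0, 0, 5760, 2880))  # (2879.5, 1439.5)
```

A reframed export evaluates a mapping like this for every output pixel of the virtual "flat" camera, and keyframed reframing simply animates yaw, pitch and field of view over time – which is exactly why the framing decision can be postponed to post-production.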
The One X and the wait for a true successor
The one camera that really proved to be an instant and long-lasting success for Insta360 was the One X, released in 2018. A very compact & slick form factor, ease of use and very decent image quality (except in low light), plus the clever companion app, breathed some much-needed life into a fairly wrinkled and deflated 360 video camera balloon. In early 2020 (you know, the days when most of us still didn't know there was a global pandemic at our doorstep), Insta360 surprised us by not releasing a direct successor to everybody's darling (the One X) but the modular One R, a flexible and innovative but slightly clunky brother to the One X. It wasn't until the end of October 2020 that Insta360 finally revealed the true successor to the One X: the One X2.
In the months prior to the announcement of the One X2, I had actually thought about getting the original One X (I wasn't fully convinced by the One R), but it was sold out in most places and there were some things that bothered me about the camera. To my delight, Insta360 seemed to have addressed most of the issues that I (and obviously many others) had with the original One X: they improved the relatively poor battery life by making room for a bigger battery, they added the ability to connect an external mic (both wirelessly through Bluetooth and via the USB-C port), they included a better screen on which you can actually see things and change settings in bright sunlight, they gave you the option to stick on lens guards to protect the delicate protruding lenses, and they made it more rugged, including an IPX8 waterproof certification (up to 10m) and a less flimsy thread for mounting it on a stick or tripod. All good then? Not quite. Just by looking at the spec sheet, people realized that there wasn't any kind of upgrade in terms of video resolution or even just frame rates. It's basically the same as the One X: it maxes out at 5.7k (5760×2880) at 30fps (with options for 25 and 24), 4k at 50fps and 3k at 100fps. The maximum bitrate is 125 Mbit/s. I'm sure quite a few folks had hoped for 8k (to get on par with the Kandao Qoocam 8K) or at the very least a 50/60fps option for 5.7k. Well, tough luck.
While I can certainly understand some of the frustration about the fact that there hasn't been any kind of bump in resolution or frame rates in two years, putting 8K into such a small device while also having the footage work for editing on today's mobile devices probably wasn't a step Insta360 was ready to take, because of the possibility of a worse user experience despite the higher-resolution image. Personally, I wasn't bothered too much by this, since the other hardware improvements over the One X were good enough for me to go ahead and make the purchase. And this is where my own frustrations began…
Insta360 & me: It’s somewhat difficult…
While I was browsing the official Insta360 store to place my order for the One X2, I noticed a pop-up saying that you could get 5% off your purchase if you signed up for their newsletter. They did exclude certain cameras and accessories, but the One X2 was mentioned nowhere. So I thought, "Oh, great! This comes at just the right time!", and signed up for the newsletter. After getting the discount code, however, entering it during check-out always returned a "Code invalid" error message. I took to Twitter to ask them about this – no reply. I contacted their support by email and they eventually – and rather flippantly – told me something like "Oh, we just forgot to put the X2 on the exclusion list, sorry, it's not eligible!". Oh yes, me and the Insta360 support were off to a great start!
Wanting to equip myself with the (for me) most important accessories, I intended to purchase a pair of spare batteries and the microphone adapter (USB-C to 3.5mm). I could write a whole rant about how outrageous I find the fact that literally everyone seems to make proprietary USB-C to 3.5mm adapters that don't work with other brands/products. E-waste galore! Anyway, there's a USB-C to 3.5mm microphone adapter from Insta360 available for the One R, and I thought, well, maybe at least within the Insta360 ecosystem there should be some cross-device compatibility. Hell no – they told me the microphone adapter for the One R doesn't work with the One X2. Ok, so I need to purchase the more expensive new one for the X2 – swell! But wait, I can't, because while it's listed in the Insta360 store, it's not available yet. And neither are spare batteries. The next bummer. So I bought the Creator Kit including the "invisible" selfie-stick, a small tripod, a microSD card, a lens cap and a pair of lens guards.
A couple of weeks later, the package arrived – no problem, in the era of Covid I'm definitely willing to cut some slack in terms of delivery times, and the merchandise is sent from China, so it has quite a way to travel to Germany. I opened the package, took out the items and checked them to see if anything was broken. I noticed that one of the lens guards had a small blemish/scratch on it. I put them on the camera anyway, thinking maybe it wouldn't really show in the footage. Well, it did. A bit annoying, but stuff like that happens – a lemon. I contacted the support again. They wanted me to take a picture of the affected lens guard. Okay. I sent them the picture. They bluntly replied that I should just buy a new one from their store, basically insinuating that it was me who had damaged the lens guard. What terrible customer service! I suppose I would have mustered up some understanding for their behaviour if I had contacted them a couple of days or weeks later, after actually using the X2 for some time outdoors where stuff can quickly happen. But I got in touch with them the same day the delivery arrived, and they should have been able to see that, since the delivery had a tracking number. Also, this item costs 25 bucks in the Insta360 store but probably only a few cents in production, and I wasn't even asking about a pair but only one – why make such a fuss about it? So there was some back-and-forth, and only after I threatened to return the whole package and asked for a complete refund did they finally agree to send me a replacement pair of lens guards at no extra cost. On a slightly positive note, they did arrive very quickly, only a couple of days later.
Is the Insta360 One X2 actually a good camera?
So what an excessive prelude I have written! What about the camera itself? I have to admit that for the most part, it’s been a lot of fun so far after using it for about a month. The design is rugged yet still beautifully simplistic and compact, the image quality in bright, sunny conditions is really good (if you don’t mind that slightly over-sharpened wide-angle look and that it’s still “only” 5.7k – remember this resolution is for the whole 360 image so it’s not equivalent to a 5.7k “flat” image), the stabilization is generally amazing (as long as the camera and its sensor are not exposed to extreme physical shakes which the software stabilization can’t compensate for) and the reframing feature in combination with the camera’s small size and weight gives you immense flexibility in creating very interesting and extraordinary shots.
Sure, it also has some weaknesses: despite the 5.7k 360 resolution, if you want to export a regular flat video, you are limited to 1080p. If you need your final video to be in UHD/4K non-360 resolution, this camera is not for you. The relatively small sensor (I wasn't able to find out the exact size for the X2 but I assume it's the same as the One X's, 1/2.3″) makes low-light situations at night or indoors a challenge despite a (fixed) aperture of f/2.0 – even a heavily overcast daytime sky can prove less than ideal. Yes, a slightly bigger sensor compared to its predecessors would have been welcome. The noticeable amount of image noise that auto-exposure introduces in such dim conditions can be reduced by exposing manually (you can set shutter speed and ISO), but then of course you just might end up with an image that's quite dark. The small sensor also doesn't allow for any fancy "cinematic" bokeh, but in combination with the fixed focus this also has an upside that shouldn't be underestimated by self-shooters: you don't have to worry about a pulsating auto-focus or being out of focus, as everything is always in focus. You can also shoot video in LOG (flatter image for more grading flexibility) and HDR (improved dynamic range in bright conditions) modes. Furthermore, there's a dedicated non-360 video mode with a 150 degree field of view, but except for the fact that you get a slight bump in resolution compared to flat reframed 360 video (1440p vs. 1080p) and smaller file sizes (you can also shoot your 5.7k in the H.265 codec to save space), I don't see myself using this a lot, as you lose all the flexibility in post.
While it’s good that all the stitching is done automatically and the camera does a fairly good job, it’s not perfect and you should definitely familiarize yourself with where the (video) stitchline goes to avoid it in the areas where you capture important objects or persons, particularly faces. As a rule of thumb when filming yourself or others you should always have one of the two lenses pointed towards you/the person and not face the side of the camera. It’s fairly easy to do if you usually have the camera in the same position relative to yourself but becomes more tricky when you include elaborate camera movements (which you probably will as the X2 basically invites you to do this!).
Regarding the audio, the internal 4-mic ambisonic setup can produce good results for ambient sound, particularly if you have the camera close to the sound source – like when you have it on a stick pointing down while you are walking over fresh snow, dead leaves, gravel etc. For recording voices in good quality, you also need to be pretty close to the camera's mics; having it on a fully extended selfie-stick isn't ideal. If you want to use the X2 on an extended stick and talk to the camera, you should use an external mic – either one that is directly connected to the camera or one plugged into an external recorder, which means having to sync audio and video later in post. As I have mentioned before, the X2 now does offer support for external mics via the USB-C charging port with the right USB-C-to-3.5mm adapter, and also via Bluetooth. Insta360 highlights in their marketing that you can use Apple's AirPods (Pro), but you can also use other mics that work via Bluetooth. The audio sample rate of Bluetooth mics is currently limited to 16kHz by the standard, but depending on the mic used you can get decent audio. I'll probably write a separate article on using external mics with the X2 once my USB-C to 3.5mm adapter arrives. Wait, does the X2 shoot 360 photos as well? Of course it does, and they turn out quite decent, particularly with the new "Pure Shot" feature, and the stitching is better than in video mode. It's no secret though that the X2 has a focus on video with all its abilities, and for those who mainly care about 360 photography for virtual tours etc., the offerings in the Ricoh Theta line will probably be the better choice.
The Insta360 mobile app
The Insta360 app (Android & iOS) might deserve its own article to get into detail, but suffice it to say that while it can occasionally seem a bit overwhelming and cluttered and you still experience glitches now and then, it's very powerful and generally works well. Do note however that if you want to export in full 5.7k resolution as a 360 video, you have to transfer the original files to a desktop computer and work with them in the (free) Insta360 Studio software (Windows/macOS), as export from the mobile app is limited to 4K. You should also be aware that neither the mobile app nor the desktop software works as a fully-fledged traditional video editor for immersive 360 video where you can have multiple clips on a timeline and arrange them for a story. In the mobile app, you do get such an editing environment ("Stories" – "My Stories" – "+ Create a story"), but while you can use your original spherical 360 footage here, you can only export the project as a (reframed) flat video (max resolution 2560×1440). If you need your export to be an actual 360 video with the according metadata, you can only do this one clip at a time, outside the "Stories" editing workspace. But as mentioned before, Insta360 focuses on the reframing of 360 video with its cameras and software, so not too many people might be bothered by that. One thing that really got on my nerves while editing within the app on an iPad: when you are connected to the X2 over WiFi, certain parts of the app that rely on a data connection don't work – for instance, you are not able to browse all the features of the shot lab (only those that have been cached before) or preview/download music tracks for the video. You have to download the clip, then disconnect from the X2, re-connect to your home WiFi and then download the track you want to use. This is less of a problem on a phone, where you can still have a mobile data connection while using a WiFi connection to the X2 (if you don't mind using up mobile data), but on an iPad or any device that doesn't have an alternative internet connection, it's quite annoying.
Who is the One X2 for?
Well, I'd say that it can be particularly useful for solo shooters and solo creators for several reasons: most of all, you don't have to worry much about missing something important around you while shooting, since you are capturing a 360 image and can choose the angle in post (reframing/keyframed reframing) if you export as a regular video. This can be extremely useful for scenarios where there's a lot to see or a lot happening around you – if you are travel-vlogging from interesting locations or reporting from within a crowd, or just generally if you want to do a piece-to-camera while also showing the viewer what you are looking at in that same moment. Insta360's software stabilization is brilliant and comparable to a gimbal, and the "invisible" selfie-stick makes it look like someone else is filming you. The stick and the compact form of the camera also let you move the camera to places that seem impossible otherwise. With the right technique you can even do fake "drone" shots. Therefore it also makes sense to have the X2 in your tool kit just for special shots, even if you are neither a vlogger nor a journalist, nor interested in "true" 360 video.
A worthy upgrade from the One X / One R?
Should you upgrade if you have a One X or One R? Yes and no. If you are happy with the battery life of the One X or the form factor of the One R and were mainly hoping for improved image quality in terms of resolution / higher frame rates, then no, the One X2 does not do the trick, it’s more of a One X 1.5 in some ways. However, if you are bothered by some “peripheral” issues like poor battery life, very limited functionality of the screen/display, lack of external microphone support (One X) or the slightly clunky and cumbersome form factor / handling (One R) and you are happy with a 5.7k resolution, the X2 is definitely the better camera overall. If you have never owned a 360 (video) camera, this is a great place to start, despite its quirks – just be aware that Insta360’s support can be surprisingly cranky and poor in case you run into any issues.
As always, if you have questions or comments, drop them here or hit me up on Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter about important things that happened in the world of mobile video.
I usually don't follow the stats for my blog, but when I recently did check which articles have been the most popular so far, I noticed that one stuck out by a large margin: the one on using external microphones with Android devices. So I thought, if people are interested in that, why not make an equivalent for iOS, that is, for iPhones? So let's jump right into it.
First things first: The Basics
A couple of basic things first: every iPhone has a built-in microphone for recording video that, depending on the use case, might already be good enough if you can position the phone close to your talent/interviewee. Having your mic close to the sound source is key in every situation to get good audio! As a matter of fact, the iPhone has multiple internal mics and uses different ones for recording video (next to the lens/lenses) and pure audio (bottom part). When doing audio-only for radio etc., it's relatively easy to get close to your subject and get good results. It's not the best way when recording video though, if you don't want to shove your phone into someone's face. In this case you can and should significantly improve the audio quality of your video by using an external mic connected to your iPhone – never forget that audio is very important! While the number of Android phone makers that support the use of external mics within their native camera app is slowly growing, there are still many (most?) Android devices out there that don't support this in the camera app that comes with the phone (it's possible with basically every Android device if you use 3rd party camera apps though!). You don't have to worry about this when shooting with the native camera app of an iPhone: it will recognize a connected external mic automatically and use it as the audio input when recording video. When it comes to 3rd party video recording apps, many of them – like Filmic Pro, MoviePro or Mavis – support the use of external mics as well, but with some of them you have to choose the audio input in the settings, so definitely do some testing before using one for the first time on a critical job. Although I'm looking at this from a videographer's angle, most of what I am about to elaborate on also applies to recording with audio recording apps. And in the same way, when I say "iPhone", I could just as well say "iPad" or "iPod Touch".
So there are basically three different ways of connecting an external mic to your iPhone: via the 3.5mm headphone jack, via the Lightning port and via Bluetooth (wireless).
3.5mm headphone jack & adapter
With all the differences between Android and iOS, both in terms of hardware and software, the 3.5mm headphone jack was, for a while, a somewhat unifying factor – that was until Apple decided to drop the headphone jack with the iPhone 7 in 2016. This move became a widely debated topic, certainly among the – let's be honest – comparatively small community of mobile videographers and audio producers relying on connecting external mics to their phones, but also among more casual users, because they couldn't just plug their (often very expensive) headphones into their iPhone anymore. While the first group is definitely more relevant for readers of this blog, the second was undoubtedly responsible for putting the issue on the public debate map. Despite the considerable outcry, Apple never looked back. They did offer a Lightning-to-3.5mm adapter – but sold it separately. I'm sure they have been making a fortune since; don't ask how many people had to buy it more than once because they lost, misplaced or broke the first one. A whole bunch of Android phone makers obviously thought Apple's idea was a progressive step forward and started ditching the headphone jack as well, equipping their phones only with a USB-C port. Unlike with Apple, however, consumers still had the option to choose a new phone that had a headphone jack, and in a rather surprising turn of events, some companies like Huawei and Google actually backtracked and re-introduced the headphone jack, at least for certain models. Anyway, if you happen to have an older iPhone (6s and earlier) you can still use the wide variety of external microphones that can be connected via the 3.5mm headphone jack without worrying much about adapters and dongles.
While most Android users probably still have fairly fresh memories of a different charging port standard (microUSB) than the one that is common now (USB-C), only seasoned iPhone aficionados will remember the days of the 30-pin connector, which lasted until the iPhone 5 introduced the Lightning port as a new standard in 2012. And while microUSB mic solutions for Android could be counted on one hand and USB-C offerings took forever to become a reality, there were dedicated Lightning mics even before Apple decided to kill the headphone jack. The most prominent one and a veritable trailblazer was probably IK Multimedia's iRig Mic HD and its successor, the iRig Mic HD 2. IK Multimedia's successor to the iRig Pre, the iRig Pre HD, comes with a Lightning cable as well. But you can also find options from other well-known companies like Zoom (iQ6, iQ7), Shure (MV88/MV88+), Sennheiser (HandMic Digital, MKE 2 Digital), Rode (Video Mic Me-L), Samson (Go Mic Mobile) or Saramonic (Blink 500). The Saramonic Blink 500 comes in multiple variations, two of them specifically targeted at iOS users: the Blink 500 B3 with one transmitter and the B4 with two transmitters. The small receiver plugs right into the Lightning port and is therefore an intriguingly compact solution, particularly when using it with a gimbal. Saramonic also has the SmartRig Di and SmartRig+ Di audio interfaces, which let you connect one or two XLR mics to your device. IK Multimedia offers two similar products with the iRig Pro and the iRig Pro Duo. Rode recently released the USB-C-to-Lightning patch cable SC15, which lets you use their Video Mic NTG (which comes with TRS/TRRS cables) with an iPhone. There's also a Lightning connector version of the SC6 breakout box, the SC6-L, which lets you connect two smartLavs or TRRS mics to your phone. I have dropped lots of product names here so far but you know what?
Even if you don’t own any of them, you most likely already have an external mic at hand: of course I’m talking about the headset that comes included with the iPhone! It can’t match the audio quality of dedicated external mics but it’s quite solid and can come in handy when you have nothing else available. One thing you should keep in mind when using any kind of microphone connected via the iPhone’s Lightning port: unless you are using a special adapter with an additional charge-through port, you will not be able to charge your device at the same time, as you can/could with older iOS devices that had a headphone jack.
I have mentioned quite a few wireless systems before (Rode Wireless Go, Saramonic Blink 500/Blink 500 Pro, Samson Go Mic Mobile) that I won’t list here (again) for one reason: While the TX/RX system of something like the Rode Wireless Go streams audio wirelessly between its units, the receiver unit (RX) needs to be connected to the iPhone via a cable or (in the case of the Blink 500) at least a connector. So strictly speaking it’s not really wireless when it comes to how the audio signal gets into the phone. Now, are there any ‘real’ wireless solutions out there? Yes, but the technology hasn’t evolved to a standard that can match wired or semi-wired solutions in terms of both quality and reliability. While there are two potential ways of getting audio into a phone wirelessly (WiFi and Bluetooth), only Bluetooth is currently in use for external microphones. This is unfortunate because the Bluetooth protocol used for sending audio back from an external accessory to the phone (the so-called Hands-Free Profile, HFP) is limited to a sample rate of 16kHz (probably because it was created with headset phone calls in mind). Professional broadcast audio usually has a sample rate of 44.1 or 48kHz. That doesn’t mean that there aren’t any situations in which using a Bluetooth mic with its 16kHz limitation can actually be good enough. The Instamic was primarily designed to be a standalone, ultra-compact, high-quality audio recorder which records 48/96kHz files to its internal 8GB storage but can also be used as a truly wireless Bluetooth mic in HFP mode. The 16kHz audio I got when recording with Filmic Pro (here’s a guide on how to use the Instamic with Filmic Pro) was surprisingly decent. This probably has to do with the fact that the Instamic’s mic capsules are high quality, unlike with most other Bluetooth mics. One maybe unexpected option is to use Apple’s AirPods/AirPods Pro as a wireless Bluetooth mic input.
According to BBC Mobile Journalism trainer Marc Blank-Settle, the audio from the AirPods Pro is “good but not great”. He does however point out that in times of Covid-19, being able to connect to other people’s AirPods wirelessly can be a welcome trick to avoid close contact. Another interesting wireless solution comes from a company called Mikme. Their microphone/audio recorder works with a dedicated companion video recording app via Bluetooth and automatically syncs the high-quality audio (44.1, 48 or 96kHz) to the video after the recording has been stopped. By doing this, they work around the 16kHz Bluetooth limitation for live audio streaming. While the audio quality itself seems to be great, the somewhat awkward form factor and the fact that its best feature only works in their own video recording app but not in other camera apps like Filmic Pro are noticeable shortcomings (you CAN manually sync the Mikme’s audio files to your Filmic or other 3rd party app footage in a video editor). At least regarding the form factor they have released a new version called the Mikme Pocket which is more compact and basically looks/works like a transmitter with a cabled clip-on lavalier mic. One more important tip that applies to all the aforementioned microphone solutions: If you are shooting outdoors, always have some sort of wind screen / wind muff for your microphone with you, as even a light breeze can cause noticeable noise.
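To put the 16kHz HFP limitation mentioned above into numbers: the Nyquist theorem says a digital recording can only contain frequencies up to half its sample rate. A quick illustrative sketch in Python (the function is my own, the sample rates are the ones from the text):

```python
def nyquist_limit_hz(sample_rate_hz):
    """Highest audio frequency a given sample rate can represent (Nyquist)."""
    return sample_rate_hz / 2

# Bluetooth HFP vs. common broadcast sample rates
for rate in (16_000, 44_100, 48_000):
    print(f"{rate} Hz sampling -> frequencies up to {nyquist_limit_hz(rate):.0f} Hz")
```

In other words, HFP audio tops out at roughly 8kHz of actual frequency content – far below the ~20kHz upper end of human hearing that 44.1/48kHz recordings can cover – which is why Bluetooth mic audio tends to sound dull and “telephone-like”.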
Looking into the near future, some fear that Apple might be pulling another “feature kill” soon, dropping the Lightning port as well and thereby eliminating all physical connections to the iPhone. While there are no clear indications that this is actually imminent, Apple surely would be the prime suspect to push this into the market. If that really happens however, it will be a considerable blow to iPhone videographers as long as there’s no established high-quality and reliable wireless standard for external mics. Oh well, there’s always another mobile platform to go to if you’re not happy with iOS anymore 😉
To wrap things up, I have asked a couple of mobile journalists / content creators using iPhones what their favorite microphone solution is when recording video (or audio in general):
Wytse Vellinga (Mobile Storyteller at Omrop Fryslân, The Netherlands): “When I am out shooting with a smartphone I want high quality worry-free audio. That is why I prefer to use the well-known brands of microphones. Currently there are three microphones I use a lot. The Sennheiser MKE200, the Rode Wireless Go and the Mikme Pocket. The Sennheiser is the microphone that is on the phone constantly when taking shots and capturing the atmospheric sound and short sound bites from people. For longer interviews I use the wireless microphones from Mikme and Rode. They offer me freedom in shooting because I don’t have to worry about the cables.”
Philip Bromwell (Digital Native Content Editor at RTÉ, Ireland): “My current favourite is the Rode Wireless Go. Being wireless, it’s a very flexible option for recording interviews and gathering localised nat sound. It has proven to be reliable too, although the original windshield was a weakness (kept detaching).”
Nick Garnett (BBC Reporter, England & the world): “The mic I always come back to is the Shure MV88+ – not so much for video – but for audio work: it uses a non-proprietary cable – micro-USB to Lightning. It allows headphones to plug into the bottom and so I can use it for monitoring the studio when doing a live insert and the mic is so small it hides in my hand if I have to be discreet. For video work? Rode VideoMicro or the Boya clone. It’s a semi-rifle, it comes with a deadcat and an isolation mount and it costs €30 … absolute bargain.”
Neal Augenstein (Radio Reporter at WTOP Washington DC, USA): “If I’m just recording a one-on-one interview, I generally use the built-in microphone of the iPhone, with a foam windscreen. I’ve yet to find a microphone that so dramatically improves the sound that it merits carrying it around. In an instance where someone’s at a podium or if I’m shooting video, I love the Rode Wireless Go. Just clipping it on the podium, without having to run cable, it pairs automatically, and the sound is predictably good. The one drawback – the tiny windscreen is tough to keep on.”
Nico Piro (Special Correspondent for RAI, Italy & the world): “To record ambient audio (effects or natural as you want to name it) I use a Rode Video Mic Go (light, no battery needed, perfect for both phones and cameras) even if I must say that the iPhone’s on-board mic performs well, too. For Facebook live I use a handheld mic by Polsen, designed for mobile, it is reliable and has a great cardioid pickup pattern. When it comes to interviews, the Rode Wireless Go beats everything for its compact dimensions and low weight. When you are recording in big cities like New York and you are worried about radio interference the good old cabled mics are always there to help, so Rode’s SmartLav+ is a very good option. I’m also using it for radio production and I am very sad that Rode stopped improving its Rode Rec app which is still good but stuck in time when it comes to file sharing. Last but not least is the Instamic. It takes zero space and it is super versatile…if you use native camera don’t forget to clap for sync!”
Bianca Maria Rathay (Freelance iPhone videographer, Germany): “My favorite external microphone for the iPhone is the RODE Wireless Go in combination with a SmartLav+ (though it works on its own also). The mic lets your interviewee walk around freely, works indoors as well as outdoors and has a full sound. Moreover it is easy to handle and monitor once you have all the necessary adapters in place and ready.”
Leonor Suarez (TV Journalist and News Editor at RTPA, Spain): “My favorite microphone solutions are: For interviews: Rode Rodelink Filmmaker Kit. It is reliable, robust and has a good quality-price relationship. I’ve been using it for years with excellent results. For interviews on the go, unexpected situations or when other mics fail: IK Multimedia iRig Mic Lav. Again, good quality-price relationship. I always carry them with me in my bag and they have allowed me to record interviews, pieces to camera and unexpected stories. What I also love is that you can check the audio with headphones while recording.”
Marcel Anderwert (Mobile Journalist at SRF, Switzerland): “For more than a year, I have been shooting all my reports for Swiss TV with one of these two mics: Voice Technologies’ VT506Mobile (with its long cable) or the Rode Wireless Go, my favourite wireless mic solution. The VT506Mobile works with iOS and Android phones, it’s a super reliable lavalier and the sound quality for interviews is just great. Rode’s Wireless Go gives me more freedom of movement. And it can be used in 3 ways: As a small clip-on mic with inbuilt transmitter, with a plugged in lavalier mic – and in combination with a simple adapter even as a handheld mic.”
As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter about important things that happened in the world of mobile video.
One of the things that has mostly remained a blind spot in video recording with the native camera app of a smartphone is the ability to shoot in PAL frame rates, i.e. 25/50fps. The native camera apps of smartphones usually record with a frame rate of 30/60fps. This is fine for many use cases but it’s not ideal under two circumstances: a) if you have to deliver your video for traditional professional broadcast in a PAL broadcast standard region (Europe, Australia, parts of Africa, Asia, South America etc.) b) if you have a multi-camera shoot with dedicated ‘regular’ cameras that only shoot 25/50fps. Sure, it’s relatively easy to capture in 25fps on your phone by using a 3rd party app like Filmic Pro or Protake but it still would be a welcome addition to any native camera app as long as this silly global frame rate divide (don’t get me started on this!) continues to exist. There was actually a prominent example of a phone maker that offered 25fps as a recording option in their (quasi-)native camera app very early on: Nokia and later Microsoft on their Lumia phones running Windows Phone / Windows Mobile. But as we all know by now, Windows Phone / Windows Mobile never really stood a chance against Android and iOS (read about its potential here) and has all but disappeared from the smartphone market. When LG introduced its highly advanced manual video mode in the native camera app of the V10, I had high hopes they would include a 25/50fps frame rate option as they were obviously aiming at more ambitious videographers. But no, the years have passed and current offerings from the Korean company like the G8X, V60 and Wing still don’t have it. It’s probably my only major gripe with LG’s otherwise outstanding flagship camera app. It was up to Sony to rekindle the flame, giving us 25fps natively in the pro camera app of the Xperia 1 II earlier this year.
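To make the frame rate divide a bit more tangible: footage shot at 30fps doesn’t fit a 25fps PAL timeline, so an editor has to either drop frames (causing stutter) or retime the clip (changing its duration and audio pitch). A small back-of-the-envelope sketch in Python – my own illustration, not how any particular editing app implements it:

```python
from fractions import Fraction

def frames_dropped_per_second(source_fps, timeline_fps):
    """Frames discarded each second when conforming by dropping frames."""
    return max(0, source_fps - timeline_fps)

def retime_factor(source_fps, timeline_fps):
    """Duration multiplier when conforming by playing every source frame
    at the timeline's rate instead."""
    return Fraction(source_fps, timeline_fps)

print(frames_dropped_per_second(30, 25))  # 5 frames thrown away every second
print(retime_factor(30, 25))              # 6/5 -> the clip runs 20% longer
```

Neither option is pretty, which is exactly why shooting at the delivery frame rate in the first place matters so much for broadcast work.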
As always, feel free to comment here or hit me up on the Twitter @smartfilming. If you like this blog post, do consider subscribing to my Telegram channel to get notified about new blog posts and also receive my Ten Telegram Takeaways newsletter including 10 interesting things that happened during the past four weeks in the world of mobile content creation/tech.
I’ve been thinking about getting my first full-frame DSLM camera for some time now; there are a whole lot of very tempting offerings out there. None however was able to tick all the boxes that are most important for me – including excellent auto-focus, great battery life, no recording limit and a price tag of around 2k. Very recently, Sony announced the Alpha 7c, their smallest full-frame camera so far. While the A7c recycles a lot of established components from earlier Sony cameras and received quite a bit of flak for that (same sensor! no 4K60! no 10bit!), it did include some minor improvements over the Alpha 7 III that might actually be a major deal for some: a fully articulating screen, eye-tracking auto focus for video and unlimited recording. On the other hand, reviewers found that the in-body image stabilization via sensor shift (IBIS) was curiously worse than that of the A7 III.
While watching some A7c-related videos on YouTube a few days ago, I stumbled upon a very interesting video by Gordon Laing though:
He reveals that the A7c has a “hidden” feature that relates to video stabilization. I say “hidden” because Sony for whatever reason didn’t bother to mention it at all when promoting its latest camera release, focusing entirely on its small form factor. The A7c has an inbuilt gyroscope sensor that records metadata about the camera’s orientation in 3D space while filming, so basically every shake you make leaves a metadata trace in the file. This metadata can be used by Sony’s free desktop software Catalyst Browse to correct the shakes and stabilize the footage in post. As you can see in Gordon Laing’s video, the results are very impressive, almost gimbal-like! This was also picked up by some other YouTubers like Camera Conspiracies and Lens Library. Sure, it’s another step in post production that you have to do (and the software seems to take its time to process footage) but the prospect of not having to pack and balance a gimbal and instead becoming even more mobile is very promising in my opinion.
Now how does this relate to smartphone videography? As you might know, all modern smartphones (unlike most traditional cameras) have gyro sensors in them; the most basic thing they’re good for is controlling the screen’s orientation (portrait or landscape) based on how you’re holding your phone. Why not take advantage of this in a more advanced way and record gyro metadata when capturing video? Google already has a pretty amazing and free software stabilization feature in its Android version of Photos (many still don’t know about it!) but I’m quite sure this is not (yet) based on recorded gyro metadata. While it might not be that easy for a 3rd party app like Filmic Pro to siphon the gyro metadata off the sensor, it should generally be possible. And what’s more: With the smartphone being not only a camera but also a computer that runs software, the post stabilization process (it might be too much for a processor to handle this in real time while shooting!) could be done on the very same device, unlike when shooting on a DSLM like the A7c. Of course this would also mean that we need some sort of mobile Catalyst Browse app for Android and iOS but maybe pro mobile video editing apps like LumaFusion or KineMaster could make this happen in the near future? It will require powerful processors but I think I’m hardly exaggerating when I say that modern flagship phones can be more powerful than a lot of desktop computers. I’m not a software developer so maybe I’m asking too much (at least right now) but I sure think it’s worth a thought, well actually more than just one!
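The basic math behind gyro-based post stabilization is surprisingly approachable: smooth the recorded camera path, then counter-rotate each frame by the difference between the smoothed and the raw path. The following Python sketch uses made-up single-axis angles and a simple moving average – a conceptual illustration only, certainly not what Catalyst Browse actually does under the hood:

```python
def stabilizing_corrections(angles_deg, window=5):
    """For each frame, the counter-rotation (in degrees) that maps the shaky
    recorded camera path onto a smoothed, moving-average path."""
    n = len(angles_deg)
    corrections = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        smoothed = sum(angles_deg[lo:hi]) / (hi - lo)
        corrections.append(smoothed - angles_deg[i])
    return corrections

# A perfectly steady shot needs no corrections:
print(stabilizing_corrections([2.0] * 6))  # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Real stabilization of course works on three rotation axes, has to deal with rolling shutter and crops the frame slightly to hide the counter-rotation, but this smooth-then-subtract idea is the heart of it.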
What do you think? Would this be something you are interested in? How do you like the results? Let me know in the comments or hit me up on the Twitter @smartfilming. If you like this blog post, do consider signing up for my Telegram Newsletter where you will be notified about new blog posts.
As I pointed out in one of my very first blog posts here (in German), smartphone videography still comes with a whole bunch of limitations (although some of them are slowly but surely going away or have at least been mitigated). Yet one central aspect of the fascinating philosophy behind phoneography (that’s the term I now prefer for referring to content creation with smartphones in general) has always been one of “can do” instead of “can’t do” despite the shortcomings. The spirit of overcoming obvious obstacles, going the extra mile to get something done, trailblazing new forms of storytelling despite not having all the bells and whistles of a whole multi-device or multi-person production environment seems to be a key factor. With this in mind I always found it a bit irritating and slightly “treacherous” to this philosophy when people proclaimed that video editing apps without the ability to have a second video track in the editing timeline are not suitable for storytelling. “YOU HAVE TO HAVE A VIDEO EDITOR WITH AT LEAST TWO VIDEO TRACKS!” Bam! If you are just starting out creating your first videos you might easily be discouraged if you hear such a statement from a seasoned video producer. Now let me just make one thing clear before digging a little deeper: I’m not saying having two (or multiple) video tracks in a video editing app as opposed to just one isn’t useful. It most definitely is. It enables you to do things you can’t or can’t easily do otherwise. However, and I can’t stress this enough, it is by no means a prerequisite for phoneography storytelling – in my very humble opinion, that is.
I can see why someone would support the idea of having two video tracks as a must for creating certain types of videography work. For instance, it could be based on the traditional concept of a news report or documentary featuring one or more persons talking (most often as part of an interview) where you don’t want the talking person to occupy the frame the whole time but still want to keep the statement going. This can help in many ways: On a very basic level, it can work as a means of visual variety to reduce the amount of “talking heads” air time. It might also help to cover up some unwanted visual distraction, like when another person stops to look at the interviewee or the camera. But it can also exemplify something that the person is talking about, creating a meaningful connection. If you are interviewing the director of a theater production who talks about the upcoming premiere, you could insert a short clip showing the theater building from the outside, a clip of a poster announcing the premiere or a clip of actors playing a scene during the rehearsal while the director is still talking. The way you do it is by adding the so-called “b-roll” clip as a layer to the primary clip in the timeline of the editing app (usually muting the audio of the b-roll or at least reducing the volume). Without a second video track it can be difficult or even impossible to pull off this mix of video from one clip with the audio from another. But let’s stop here for a moment: Is this really the ONLY legitimate way to tell a story? Sure, as I just pointed out, it does have merit and can be a helpful tool – but I strongly believe that it’s also possible to tell a good story without this “trick” – and therefore without the need for a second video track. Here are some ideas:
Most of us have probably come across the strange acronym WYSIWYG: “What you see is what you get” – it’s a concept from UI design where it means that the preview you are getting in a (text/website/CMS) editor will very much resemble the way things actually look after creating/publishing. If you mark a word as bold in the editor and it appears bold right there, this is WYSIWYG. If you have to punch in code like <b>bold</b> into your text editing interface to make the published end result bold, that’s not WYSIWYG. So I dare to steal this bizarre acronym in a slightly altered version and context: WYSIWYH – “What you see is what you hear” – meaning that your video clips always keep their original sound. So in the case of an interview like the one described before, using a video editing app with only one video track, you would either present the interview in one piece (if it’s not very long) or cut it into smaller chunks with “b-roll” footage in between rather than overlaid (if you don’t want the questions included). Sure, it will look or feel a bit different, not “traditional”, but is that bad? Can’t it still be a good video story? One fairly technical problem we might encounter here is getting smooth audio transitions between clips when the audio levels of the two clips are very different. Video editing apps usually don’t have audio-only cross-fades (WHY is that, I ask!) and a cross-fade involving both audio AND video might not be the preferred transition of choice, as most of the time you want to use a plain cut. There are ways to work around this however, or you can just accept it as a stylistic choice for this way of storytelling.
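The missing audio-only cross-fade is, at its core, just a linear blend between the tail of one clip and the head of the next. A minimal Python sketch treating audio as plain lists of samples (real editors operate on PCM buffers, but the math is the same; the function name is my own):

```python
def audio_crossfade(tail, head):
    """Blend the last samples of clip A (tail) with the first samples of
    clip B (head): A fades out linearly while B fades in."""
    n = len(tail)
    if n != len(head) or n < 2:
        raise ValueError("overlap regions must be equally long (and non-trivial)")
    return [
        tail[i] * (1 - i / (n - 1)) + head[i] * (i / (n - 1))
        for i in range(n)
    ]

# Fading from silence into a constant tone over three samples:
print(audio_crossfade([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))  # [0.0, 0.5, 1.0]
```

That handful of multiplications is all it would take, which makes it even more puzzling that so many mobile editors leave the feature out.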
Another very interesting approach that results in a much easier edit without the need for a second video track (if any at all) but requires more pre-planning for the shoot is the one-shot approach. In contrast to what many one-man-band video journalists do (using a tripod with a static camera), this means you need to be an active camera operator at the same time to catch different visual aspects of the scene. This probably also calls for some sort of stabilization solution like phone-internal OIS/EIS, a rig, a gimbal or at least a steady hand and some practice. Journalist Kai Rüsberg has been an advocate of this style and collected some good tips here (blog post is in German but Google Translate should help you get the gist). As a matter of fact, there’s even a small selection of notable feature films created in such a (risky) manner, among them “Russian Ark” (2002) and “Victoria” (2015). One other thing we need to take into consideration is that if there’s any kind of question-asking involved, the interviewer’s voice will be “on air”, so the audio should be good enough for this as well. I personally think that this style can be (if done right!) quite fascinating and more visually immersive than an edited package with static separate shots but it poses some challenges and might not be suited for everybody and every job/situation. Still, doing something like that might just expand your storytelling capabilities by trying something different. A one-track video editing app will suffice to add some text, titles, narration, fade in/out etc.
A unique amalgam of the traditional multi-clip approach and the one-shot method is a technique I called “shediting” in an earlier blog post. It involves a certain feature that is present in many native and some 3rd party camera apps: By pausing the recording instead of stopping it in between shots, you can cram a whole bunch of different shots into a single clip. Just like with the one-shot approach, this can save you lots of time in the edit (sometimes things need to go really fast!) but requires more elaborate planning and comes with a certain risk. It also usually means that everything needs to be filmed within a very compact time frame and in one location/area because in most cases you can’t close the app or let the phone go to sleep without actually stopping the recording. Nonetheless, I find this to be an extremely underrated and widely unknown “hack” to piece together a package on the go! Do yourself a favor and try to tell a short video story that way!
A way to tackle rough audio transitions (or bad/challenging sound in general) while also creating a sense of continuity between clips is to use a voice-over narration in post production. Most mobile editors offer this option directly within the app, and even if you happen to come across one that doesn’t (or, like Videoshop, hides it behind a paywall), you can easily record a voice-over in a separate audio recording app and import the audio into your video editor – although it’s a bit more of a hassle if you need to redo it when the timing isn’t quite right. One example could be splicing your interview into several clips in the timeline and adding “b-roll” footage with a voice-over in between. Of course you should see to it that the voice-over is somewhat meaningful and doesn’t just repeat redundant information or give away the gist / key argument of an upcoming statement by the interviewee. You could however build/rephrase an actual question into the voice-over: Instead of having the original question “What challenges did you experience during the rehearsal process?” in the footage, you record a voice-over saying “During the rehearsal process director XY faced several challenges both on and off the stage…” for the insert clip, followed by the director’s answer to the question. It might also help in such a situation to let the voice-over begin at the end of the previous clip and flow into the subsequent one to cover up an obvious change in the ambient sound of the different clips. Of course, depending on the footage, the story and the situation, this might not always work perfectly.
Finally, with more and more media content being consumed muted on smartphones “on the go” in public, one can also think about having text and titles as an important narrative tool, particularly if there’s no interview involved (of course a subtitled interview would also be just fine!). This only works however if your editing app has an adequate title tool, nothing too fancy but at least covering the basics like control over fonts, size, position, color etc. (looking at you, iMovie for iOS!). Unlike adding a second video track, titles don’t tax the processor very much so even ultra-budget phones will be able to handle it.
Now, do you still remember the second part of this article’s title, the one in parentheses? I have just gone to some lengths to explain why I think it’s not always necessary to use a video editing app with at least two video tracks to create a video story with your phone, so why would I now be saying that after all it doesn’t really matter that much anymore? Well, if you look back a whole bunch of years (say around 2013/2014) when the phoneography movement really started to gather momentum, the idea of having two video tracks in a video editing app was not only a theoretical question for app developers, thinking about how advanced they WANTED their app to be. It was also very much a plain technical consideration, particularly for Android where the processing power of devices ranged from quite weak to quite powerful. Processing multiple video streams in HD resolution simultaneously was no small feat at the time for a mobile processor; to a small degree this might even still be true today. This meant that not only was there a (very) limited selection of video editing apps with the ability to handle more than just one video track at the same time, but even when an app like KineMaster or PowerDirector generally supported the use of multiple video tracks, this feature was only available for certain devices, excluding phones and tablets with very basic processors that weren’t up to the task. Now this has very much changed over the last years with SoCs (System-on-a-Chip) becoming more and more powerful, at least when it comes to handling video footage in FHD 1080p resolution as opposed to UHD/4K! Sure, I bet there’s still a handful of (old) budget Android devices out there that can’t handle two tracks of HD video in an editing app, but mostly, having the ability to use at least two video tracks is not really tied to technical constraints anymore – if the app developers want their app to have multi-track editing, they should be able to integrate it.
And you can definitely see that an increasing number of video editing apps have (added) this feature – one that’s really good, cross-platform and free without watermark is VN which I wrote about in an earlier article.
So, despite having argued that two video tracks in an editing app is not an absolute prerequisite for producing a good video story on your phone, the fact that nowadays many apps and basically all devices support this feature very much reduces the potential conflict that could arise from such an opinion. I do hope however that the mindset of the phoneography movement continues to be one of “can do” instead of “can’t do”, exploring new ways of storytelling, not just producing traditional formats with new “non-traditional” devices.
As usual, feel free to drop a comment or get in touch on the Twitter @smartfilming. If you like this blog, consider signing up for my Telegram channel t.me/smartfilming.
Have you ever had sleepless nights wondering whether the video recording app you are using really shoots in the frame rate and bitrate that it says it does? What’s the codec of the video file that was just sent to me? And (how much) does my editing app of choice crunch the bitrate (“quality”) of the original clips when exporting the project? No? Good for you, you may skip this article! But since you are already here you might as well read it anyway! I’m going to look at three different apps, one Android-only, one iOS-only and one that is available for both Google’s and Apple’s mobile platform.
You might ask, “Do I really need an extra app to get some basic info about a video file? I can do that with a keyboard shortcut on my desktop computer!”. Well, yes and no. That depends on what you consider to be “basic info”. Generally speaking, it’s a lot easier to access some standard file properties on Android than on iOS. Not only does pretty much every device come with a file manager that actually deserves the name but you will also be able to get file size, file container format and the resolution from the device’s Gallery app. Usually, an option labeled “Details” or “Info” is available in a menu after selecting a video clip. One would think that such trivialities should also be accessible from iOS’s Camera Roll, but … no. In case you don’t know, “Gallery” (Android) and “Camera Roll” (iOS) refer to the “image bucket” where all photos and videos go unless they are stored directly within an app. The only info about a video you get in Apple’s Camera Roll is the length of the video. Yes, there’s a way to get a little bit more data without installing a 3rd party app: Select a clip in the Camera Roll and share-copy it to iOS’s “Files” app by choosing “Save to Files” from the share options. Tap on “On My iPhone/iPad” and select any folder (or create a new one!) where you want the copy to go, then tap on “Save” in the top-right corner. Next, open the Files app, locate the file and long press on it. From the pop-up menu, select “Info”. You will now at least know the file size and the resolution (“Dimensions”) of the video. A tad tedious? Seriously? Ok…
Head on over to the Apple AppStore and download an app called Metapho (shout-out to Mr Marc Blank-Settle who initially pointed me towards it). The app is free (never mind the App Store always telling you that it’s “processing payment” when downloading an app, even if it’s free!) and will give you the following info for video files: File container format (usually it’s a Quicktime Movie aka .mov), length, frame rate, resolution, file size and video codec (in most cases either H.264 or the newer HEVC/H.265). There’s an in-app purchase for 4.49€ but it doesn’t give you more in-depth specs, “only” other additional features like removing or altering the metadata. If you need to dig deeper and are curious about video and audio bitrates, audio codec, audio sample rate etc. you will need another app though.
But first, let’s move on to Android for a second. If you want more detailed information about a video file than you can pull from the system’s Gallery app or file manager, go have a look at an app called VidTrim. VidTrim is primarily meant to be a simple one-clip video editor with which you can trim a clip, transcode it or extract the audio as an mp3 file. But I don’t think I have ever used it for such purposes. Instead, it’s my go-to app for moderately detailed info about a video’s properties: resolution (“Picture Size”), file size, rotation, frame rate, audio codec, video codec, video bitrate and audio bitrate. There’s a paid version for 3.29€ by the name of VidTrim Pro but unless you are bothered by the ads or want to export a video from the app without a watermark, you are totally fine with the free version.
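By the way, if you ever want to sanity-check the bitrate an app like VidTrim reports, you can approximate a file’s average total bitrate yourself from two values almost every gallery shows: file size and duration. A quick Python sketch (the example numbers are made up):

```python
def average_bitrate_mbps(file_size_bytes, duration_seconds):
    """Average total bitrate (video + audio + container overhead) in Mbit/s."""
    return file_size_bytes * 8 / duration_seconds / 1_000_000

# A 60-second clip weighing in at 150 MB averages 20 Mbit/s:
print(average_bitrate_mbps(150_000_000, 60))  # 20.0
```

The number you get this way will sit slightly above the pure video bitrate, since it also includes audio and container overhead – but it’s close enough to spot when an app has quietly crunched your footage.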
If the metadata available in VidTrim is still not good enough for you, you should check out the app MediaInfo, which is also available for iOS (although with a little catch). MediaInfo is a well-known standard tool on desktop computers for many video production professionals. There was a time when I wished MediaInfo would launch a mobile app. And well, they did in late 2018! The UI isn’t really pretty to look at in portrait orientation (scaling needs to be improved!), so unless you are using it on a tablet, you should hold the device in landscape mode when working with MediaInfo. I will refrain from listing every single video file property that MediaInfo gives you because, at the risk of being wrong after all, it appears to me that it basically exposes every bit of metadata there is. So if you really want to go down the rabbit hole, have at it! MediaInfo is free, has no ads and offers full core functionality. There’s the option to support the development of the app with a subscription of 5€ per year, though the bonus features, including a dark mode, are not really spectacular. Before wrapping this up, I need to add a quick note about using the iOS version of MediaInfo, coming full circle so to speak. While the app’s functionality is no different from the Android version, accessing files can be really painful if the file you want to check out is located in the Camera Roll and not the Files app. For some reason, MediaInfo can only access the Files app, not the Camera Roll. It’s also not possible to share to MediaInfo from the Camera Roll. This basically means that you need to copy the file you want to inspect from the Camera Roll to the Files app to access it from MediaInfo. As you might remember, I explained how to do just that earlier on. It ain’t pretty, but that’s the way it is at the moment. I have contacted the developer about this and they have acknowledged the problem, so there might be a fix in the near future.
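If you are curious what tools like MediaInfo actually read under the hood: MP4/MOV files store their metadata in nested “boxes” (also called atoms), and the movie duration, for instance, lives in the moov/mvhd box. The following Python snippet is a minimal, illustrative sketch of parsing just that one value – the function names are my own and this is in no way a replacement for a full metadata tool:

```python
import struct

def parse_boxes(data, offset=0, end=None):
    """Yield (box_type, payload_start, payload_end) for boxes in a byte range."""
    if end is None:
        end = len(data)
    while offset + 8 <= end:
        size, = struct.unpack_from(">I", data, offset)  # 32-bit big-endian box size
        box_type = data[offset + 4:offset + 8].decode("ascii", "replace")
        if size < 8:  # malformed or 64-bit size, which this sketch doesn't handle
            break
        yield box_type, offset + 8, offset + size
        offset += size

def mp4_duration_seconds(data):
    """Return the movie duration in seconds from the moov/mvhd box, or None."""
    for btype, start, bend in parse_boxes(data):
        if btype != "moov":
            continue
        for ctype, cstart, cend in parse_boxes(data, start, bend):
            if ctype == "mvhd":
                version = data[cstart]
                if version == 0:  # 32-bit timescale/duration fields
                    timescale, duration = struct.unpack_from(">II", data, cstart + 12)
                else:  # version 1 uses 64-bit creation/modification timestamps
                    timescale, = struct.unpack_from(">I", data, cstart + 20)
                    duration, = struct.unpack_from(">Q", data, cstart + 24)
                return duration / timescale
    return None
```

Real-world tools handle many more box types, 64-bit sizes and broken files; in practice you would of course just run MediaInfo instead of rolling your own parser.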
One last thing: If you only need certain file properties of a video, you might already be able to see those in the media library of advanced camera apps, but the info is usually limited and it’s also good to double-check outside the app you shot your footage with.
As usual, comments and questions are welcome here or on Twitter @smartfilming. If you like my blog in general, consider signing up for my Telegram channel t.me/smartfilming.
One of the most fascinating and convenient things about a good modern smartphone is that it lets you handle a whole video production workflow – capturing, editing and publishing – on a single device, eliminating the tedious but usually mandatory process of transferring media files between several devices to get all this done. Depending on the situation, however, there’s still a certain need for file transfer solutions. You might be shooting on a phone but want to edit on a tablet with a larger screen, someone else might be the one editing your captured footage, or you might want to receive footage from another person to incorporate into your phone edit. Of course, nowadays, we want everything to be wireless if possible.
Both major mobile platforms, Apple’s iOS and Google’s Android, include the option to wirelessly transfer files to another (nearby) device running the same operating system, using Bluetooth for the devices to find and connect to each other and a WiFi protocol for the actual transfer – no internet connection required! Apple provides the easier and more straightforward way with its AirDrop feature baked right into the OS, while Google requires you (and the receiver) to install its Files by Google app (they also seem to be working on an AirDrop equivalent called “Nearby Sharing” that could launch with the next official version of Android, Android 11). Things get a bit more complicated, however, if you want to transfer files between the two platforms. Don’t despair though, you do have options depending on what kind of transfer method you prefer.
There are basically four different ways: cloud, temp cloud, device-to-device with internet and device-to-device without internet. You will need an active internet connection for the first three options; you won’t need one for the fourth.
I’m sure most of us are pretty familiar with some kind of cloud storage service: Dropbox, Google Drive, iCloud, Microsoft OneDrive, Box etc. You can upload files to a cloud server and access/download them from anywhere – anywhere with an internet connection, that is. The good thing is that no matter what mobile platform you are on, you already have some free cloud storage at your fingertips: Google gives you 15GB of free cloud storage on Google Drive, and unless you are rocking a very recent Huawei phone, using an Android device basically means you already have a Google account and Google Drive pre-installed on your phone. Apple is a bit more stingy and gives you only 5GB of free iCloud storage. And even if we ignore the amount of free storage, Google Drive is the better cross-platform choice because it’s also available for iOS, while there’s no iCloud app for Android. All other major cloud storage solutions including Dropbox, Microsoft OneDrive and Box have apps for both Android and iOS, so no problem there. Another somewhat uncommon choice could be the messenger app Telegram. As I pointed out in my last blog post, Telegram gives you unlimited cloud storage for free. However, while its maximum file size of 1.5GB per file is huge compared to what you can send with other messenger apps, it can’t compete with dedicated cloud storage services in this regard: Google Drive and Dropbox, for instance, have no file size limit at all, and OneDrive recently expanded from 15 to 100GB per file, which should cover most common use cases. Generally, you should be aware of the fact that unlike with the device-to-device solutions mentioned later on, the file is not transferred directly to the other device’s storage. Once it’s been uploaded into the cloud from device A, device B needs to download it from there if you (or another person) want(s) to work with it. If you/someone else are/is not using the same cloud service account on both devices, this involves the sharing of a download link.
Things to consider here are also the available upload/download speed and the consumption of data if you are using mobile internet. Uploading/downloading big video files via mobile data can wreak havoc on your data plan – at least in certain countries… So better make sure you’re connected to a fast WiFi network. All mentioned services send files in their original quality without compression as far as I could see.
I don’t think “temp cloud” is an actual term, but I was looking for a word to describe file transfer services that allow you to temporarily store something in their cloud and create a shareable download link, with the file being automatically deleted after a short period of time. The most popular service of that kind on the web is probably WeTransfer. They used to have mobile apps for Android and iOS as well, but they discontinued them some time ago, replacing them with an app called Collect. The all-new UI and different structure have generated a lot of backlash from WeTransfer fans though. While it’s true that the whole “Boards” layout can be confusing, one can get the same transfer job done with Collect by adding files to a “Board” within the app and then sharing the “Board” via a download link which expires automatically after 90 days. And that’s not all: unlike with the web service, there’s no file size limit! In case Collect remains a mystery to you, WeTransfer is still available as a web service with the option to send files of up to 2GB in the free version (20GB in the paid pro version). If you want to send your files encrypted for security reasons, you should have a look at an offering from Mozilla’s popular Firefox brand: Firefox Send. You can send files of up to 1GB with an expiration time of one day without creating an account, or up to 2.5GB with an expiration time of seven days with a free account. It’s still in beta and only available as a mobile app on Android; you can however access the service on iOS as well via a web browser. A popular choice for all kinds of file transfers is Send Anywhere (which will pop up again in the coming paragraphs). While I mostly use Send Anywhere for device-to-device file transfer, they also have the option to send files via temp cloud / download link. You will however have to create a (free) account with them to use this feature. The file size limit is 10GB for the free account, 50GB for the paid Plus account.
Send Anywhere sports excellent mobile apps for both Android and iOS. Another service that I just recently discovered is the Norwegian company Filemail which also has mobile apps for both Android and iOS. Their file size allowance is huge, a whopping 50GB, but the free version only lets you do two transfers a day. Still a pretty cool option so you should give it a go! You can choose either one day or seven days for the link to expire. All mentioned services send files in their original quality without compression as far as I could see.
Device-to-Device with internet
If you don’t want to use a service that stores your files on an external cloud server but prefer a direct transfer between two devices, Send Anywhere is a good choice again. Do note that despite the fact that I’m talking about device-to-device transfer, you will need an internet connection and it will use up data if you’re on mobile internet. Transfer speeds depend on the upload/download speed available. Unlike the two cloud solutions with a download link, this way is particularly useful if the device you want to transfer to is right next to you and the file will be used right away. Both devices need to have Send Anywhere installed and open, unless you want to use their web service via a browser which has a file size limitation. After selecting the files you want to send, the sending device will generate a 6-digit key which needs to be entered on the receiving device within a time frame of 10 minutes to initiate the transfer. While there is no file size limit, do make sure that the receiving device has enough free storage available! With Feem there’s another good choice available for Android and iOS. It’s the same principle but works slightly differently from Send Anywhere: After opening the app on both devices, they should detect each other (in the free version the app automatically assigns silly nicknames like “Lonely Gecko” or “Reckless Chicken” to the devices). You then tap on the listed device you want to send files to, choose “Send File” and select the files you want to send, finally tapping the “send” button. Important note: Unlike Send Anywhere (which can also utilize mobile data), Feem only works if both devices are connected to the same WiFi network! Feem is free to use. It has a paid pro version (annual subscription of 4.99 Euro/US-Dollar) which gives you a whole bunch of customization options for the device name, avatar, download folder but nothing really essential. 
All mentioned services send files in their original quality without compression as far as I could see.
Device-to-Device without internet
Unlike all the aforementioned options, Feem also has the ability to work across platforms without an active internet connection, which makes it pretty unique and a lifesaver in certain situations! While Send Anywhere has an option to share files device-to-device without an active internet connection via the WiFi Direct protocol, this is only available between Android devices as iOS doesn’t support the standard, so I will have to exclude it here for the purpose of this article. The magic trick in Feem is done by the Android device creating a local WiFi network to which the iOS device can connect (it doesn’t work the other way round, but that’s not really a problem). Feem gives you a pretty good step-by-step guide on how to do this upon opening the app, so I won’t get into the details here (it’s not that complicated, don’t worry!), but you basically switch on the “Turn on Wi-Fi Direct” button in the app and a pop-up with the hotspot name and password appears, which you then use to connect your iPhone or iPad to this network and commence with your file transfer from within the app. This is a great feature which you can use either if there’s no internet available at all or if you don’t want to use up data. The app is not 100% stable all the time, so you might have to redo a transfer on occasion, but in general I have found it to work quite well.
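Conceptually, what apps like Feem do on a shared local network is simple: one device listens on a socket, the other connects, and the bytes are streamed across directly – no internet involved. Here is a minimal, illustrative Python sketch of that idea (localhost stands in for two devices on the same WiFi network; this is of course not Feem’s actual protocol, and the function names are my own):

```python
import socket
import threading

def serve_file(data: bytes, host="127.0.0.1", port=0):
    """Listen on the local network and stream `data` to the first client.

    Returns the port the sender is listening on -- comparable to the
    address a transfer app would advertise to nearby devices.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port 0 lets the OS pick a free port
    srv.listen(1)

    def _send():
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(data)  # closing the connection signals end-of-file
        srv.close()

    threading.Thread(target=_send, daemon=True).start()
    return srv.getsockname()[1]

def receive_file(host, port):
    """Connect to the sender and read until the stream ends."""
    chunks = []
    with socket.create_connection((host, port)) as conn:
        while chunk := conn.recv(65536):
            chunks.append(chunk)
    return b"".join(chunks)
```

A real transfer app layers device discovery, filenames, checksums and resumable transfers on top of this basic pattern, but the core data path is just a socket like this.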
One final note: With most services you will also be able to send a file without having to open the app first. You can locate a video file in the Gallery (Android) or Camera Roll (iOS) and then use the OS’s share sheet to send the selected file with the file transfer app of your choice.
I’m sure there are many other options out there, so this article is by no means a complete overview but just a highly personal selection of available choices that I deem worth checking out. Feel free to drop comments and questions here or hit me up on Twitter @smartfilming. You can also sign up for my Telegram channel t.me/smartfilming.
Telegram started out in 2013, founded by the Russian brothers Nikolai and Pavel Durov, who had already created “Russia’s Facebook”, VK. While it was able to avoid being seen as “the Kremlin messenger”, its claims of providing an experience that is very strong in terms of security and data protection have received some flak from experts. It also came into a questionable spotlight as the preferred modus communicandi of the so-called “Islamic State” and other extremist groups that want to avoid scrutiny from intelligence agencies. But this is just some general context and everyone can decide for herself/himself what to make of it.
The reason for this article has nothing to do with the aforementioned “historical” context but looks only at the app’s potentially useful functionality when it comes to media production, particularly video production. People are sending enormous amounts of video these days via their messenger apps. For reasons benefitting the sender/receiver as well as the app service provider itself, those videos are usually compressed, both in terms of resolution and bitrate. The compression results in smaller file sizes which lets you send/receive them faster, use up less storage space and avoid burning through too much mobile data. This works pretty well when all you do is watch the video in your messenger app, it’s far from ideal however if you want to work with the video somebody sent you.
While there is a way to prevent the app from automatically compressing your video by sending/attaching it not as a video (which is the usual way of doing it) but as a file (as you would normally add a doc or pdf), the file size limit of most messenger apps is so small that it’s not really suitable for sending video files that are longer than one minute. WhatsApp has a current file size limit of 100MB, and so does Signal. Threema tops out at 50MB for sending files uncompressed, while Facebook Messenger gives you a measly 25MB! To put that into perspective: at a moderate bitrate of 16Mbit/s, a FHD 1920×1080 video will reach the 100MB limit after only 50 seconds. In this regard, Telegram is basically lightyears ahead of the competition as it lets you send uncompressed files of up to 2 GB (around 2000 MB) – yes, you heard that right!
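The arithmetic behind these numbers is simple: the limit in megabytes converts to megabits (×8) and divides by the bitrate. A small Python sketch; it only counts the video bitrate and ignores audio and container overhead, so real recordings hit the limit slightly sooner:

```python
def seconds_until_limit(limit_mb: float, bitrate_mbps: float) -> float:
    """Seconds of video that fit under a file size limit.

    Uses decimal units (1 MB = 8 Mbit) and ignores audio and
    container overhead, so this is an optimistic estimate.
    """
    return limit_mb * 8 / bitrate_mbps

# WhatsApp's/Signal's 100 MB limit at 16 Mbit/s: 50 seconds.
print(seconds_until_limit(100, 16))   # 50.0
# Telegram's 2 GB (~2000 MB) at the same bitrate: 1000 seconds (~16.7 minutes).
print(seconds_until_limit(2000, 16))  # 1000.0
```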
To send an uncompressed video file within Telegram, tap on the paper clip icon in a chat, select “File” (NOT “Gallery”) and then “Gallery. To send images without compression” (or choose one of the other options if your video file is located somewhere else on the device). It’s that easy! There’s also a cool way to use Telegram as your personal unlimited cloud storage: If you open the app’s menu (by tapping on the three lines in the top left corner) you will find an option that says “Saved Messages”. This is basically your own personal space within the app where you can collect all kinds of material like notes, links or files. As long as a file doesn’t exceed 2 GB, you can upload it into this “self chat” like you would with a regular cloud storage service like Dropbox, Google Drive or OneDrive. And believe it or not, you currently get UNLIMITED storage for free! I think there’s a chance that Telegram might cap this at some point in the future if people start using it too excessively, but until then, this is a pretty amazing feature most users don’t know about (even I didn’t until a few days ago!).
This benefit gets even more powerful when you consider that you can use Telegram with the same account across several devices (it’s not only available for Android, iOS and Windows 10 Mobile but also has desktop apps for Windows and macOS!), something you can’t do with other messengers like WhatsApp, which ties you to a single mobile device for active use of one account. A side note though: If you have someone send you a big uncompressed video file over mobile data, you might want to tell the other person that it will cut into their mobile data allowance significantly. So if possible, they should send it when logged into a WiFi network.
And even if your goal is actually to compress a video when sending it, Telegram gives you the best choices to do so. When selecting a video via the Gallery button (instead of the File button) you can adjust the resolution of the clip by using the app’s recently updated in-app video editor. After marking your clip of choice by tapping on the empty circle in the top right corner of the video’s thumbnail, tap on the thumbnail itself to open the video editor. You will be able to trim the clip or add a drawing/text/sticker (brush icon). You can even do some basic color correction (sliders icon), I kid you not! And you can adjust the video resolution by tapping on the gear icon in the bottom left corner of the tool box. By moving the slider you can choose between FHD 1920×1080, HD 1280×720, SD 854×480 and what I will call “LD” (low definition) 480×270.
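To get a rough feel for how much that resolution slider can shrink a file: if we assume – simplistically, since real encoders don’t scale linearly with resolution – that bitrate tracks pixel count, the relative sizes of the presets look like this (a Python sketch; the preset names and values are the ones from the slider described above, with “LD” being my own label):

```python
# Telegram's resolution presets as described above ("LD" is my own label).
RESOLUTIONS = {
    "FHD": (1920, 1080),
    "HD":  (1280, 720),
    "SD":  (854, 480),
    "LD":  (480, 270),
}

def pixel_ratio(target: str, reference: str = "FHD") -> float:
    """Pixel count of `target` relative to `reference` -- a crude
    proxy for relative file size at comparable quality settings."""
    tw, th = RESOLUTIONS[target]
    rw, rh = RESOLUTIONS[reference]
    return (tw * th) / (rw * rh)

for name in RESOLUTIONS:
    print(f"{name}: {pixel_ratio(name):.0%} of FHD's pixels")
```

So dropping from FHD to HD already cuts the pixel count by more than half, and the lowest setting keeps only a small fraction – which is why messenger compression makes files so much smaller (and uglier).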
If your primary focus when using messenger apps is the most comprehensive security/data protection or mass compatibility, and you don’t need the app as a tool for direct (video) file transfer, then you might still prefer Signal, Threema or WhatsApp respectively. Otherwise, Telegram is a powerful tool with best-in-class features for a professional video production workflow.
And despite the fact that Telegram is still far from being as ubiquitous as WhatsApp or Facebook Messenger, it has significantly grown its user base over the last months and years (currently over half a billion installs from the Google Play Store!), so chances are getting better that the person sending video to you is using it or has at least installed it on her/his phone.
Questions and comments are welcome, either below in the comment section or on Twitter @smartfilming. I also just created my own Telegram channel which you can join here: https://t.me/smartfilming.
Filmic Pro might be called the “Gold Standard” of highly advanced mobile video recording apps on both Android and iOS – it surely is the most popular and widely known one. Even Oscar-winning director Steven Soderbergh has used it to shoot two of his feature films. The fact that a powerful rival has just recently launched is bigger news for Android users though than for those on iOS. There are a couple of very capable alternatives to Filmic Pro on iOS, including Mavis, MoviePro and Moment Pro Camera. While options are available on Android as well, they are not as numerous and/or complete, and for quite a few, development has either ceased completely (Cinema FV-5 and recently Moment Pro Camera) or been reduced for the most part to bug fixes and minor compatibility adjustments (Cinema 4K, Lumio Cam, ProShot). There’s also the solid free Open Camera (plus a whole range of variants based on its open source code) and the pretty good Footej Camera 2, but none of them can really match Filmic Pro when it comes to usability and advanced features. That is, until now.
Only two weeks ago, an app called Protake – Mobile Cinema Camera popped up in the Google Play Store (and also the Apple App Store). The screenshots looked quite promising, and after downloading it and taking it for a quick spin I can confirm that there’s now another immensely powerful mobile video recording app available for both Android and iOS. Protake gives you full manual control over exposure (shutter speed/angle and ISO), focus and white balance; support for external mics and a visual audio level meter plus the ability to adjust input gain; a whole set of exposure and focus assistants (zebra, false color, focus peaking, waveform monitor, RGB parade, histogram); different aspect ratios (including various widescreen formats and square, but apparently skipping 9:16 vertical); frame rates (incl. 25fps, but not 50/60 on any of my devices – that might be different on other phones); resolutions; bitrates (they don’t go as high as Filmic Pro’s though); codecs (H.264/H.265); color profiles/looks etc. You even have an interesting option called “Frame Drop Notice” which I have never seen anywhere else before, and some useful one-tap quick buttons for hiding the UI or switching between maximum screen brightness and current brightness. There’s also support for external accessories like Zhiyun gimbals, anamorphic lenses or a DOF adapter. All in all, it’s a feature range almost as complete as Filmic Pro’s, and the UI is slick and intuitive.
There is however one catch: While you can download the app for free and use the auto mode to record, you can only activate recording in the pro mode (including manual controls and most advanced features) by buying a subscription. The subscription model has become common practice for many apps in the last years (particularly video editing apps), but so far I hadn’t really encountered it in a camera app. The subscription price is 10.99 Euros (9.99 US-Dollars) per year, which is somewhat moderate compared to other apps (broken down, it’s less than 1 Euro per month), but as I said, it’s new for this kind of app (at least to me!), so it might take a bit of getting used to. It should be noted that the current price is a 50% off offer, so the regular price would actually be double – venturing into financial territory not too many of us might be willing to follow. There’s another thing to keep in mind which probably isn’t of any relevance to most users, but definitely to someone like me with a whole zoo of different phones: The subscription will only let you use the pro mode on three different devices at the same time. So if you want to use it on more than three, I suppose you will need to buy a second subscription. This should however be a very rare use case.
One last thing: If you are on Android, please note that most features of the pro mode (like setting specific values for shutter speed and ISO) are only available if your Android device fully supports the Camera2 API, which lets 3rd-party apps access the more advanced functionality of the phone’s camera. If Camera2 API support hasn’t been implemented properly by the maker of the phone, 3rd-party apps can’t access certain features no matter how capable their developers are. As a rule of thumb, relatively current flagship phones and midrangers usually have sufficient Camera2 API support; entry-level phones only sometimes. If you want to learn more about the topic, check out this older blog post of mine.
Let me know what you think of Protake! Either here in the comments or on Twitter @smartfilming.