smartfilming

Exploring the possibilities of video production with smartphones

#46 Top tips for smartphone videography in the summer — 28. June 2021


Photo: Julia Volk via Pexels.com

It’s the dog days of summer again – well at least if you live in the northern hemisphere or near the equator. While many people will be happy to finally escape the long lockdown winter and are looking forward to meeting friends and family outside, intense sunlight and heat can also put extra stress on the body – and it makes for some obvious and less obvious challenges when doing videography. Here are some tips/ideas to tackle those challenges.

Icon: Alexandr Razdolyanskiy via The Noun Project

Find a good time/spot!
Generally, some of the problems mentioned later on can be avoided by picking the right spot and/or time for an outdoor shoot during the summertime. Maybe don’t set up your shot in the middle of a big open field where you and your phone are totally exposed to the full load of sunshine photons at high noon. Rather, try to shoot in the morning, late afternoon or early evening, and think about picking a spot in the shade. Or choose a time when it’s slightly overcast. Of course it’s not always possible to freely choose time and spot; sometimes you just have to work in difficult conditions.

„Bum to the sun“ – yes or no?
There’s a saying that you should turn your „bum to the sun“ when shooting video. This definitely holds some truth, as pointing the lens directly towards the sun can cause multiple problems, including unwanted lens flare, underexposed faces or a blown-out background. You can however also create artistically interesting shots that way (silhouettes, for instance), and the „bum to the sun“ motto comes with problems of its own: If you are shooting away from the sun but the person you are filming is looking directly towards it, they could be blinded by the intense sunlight and squint their eyes, which doesn’t look very flattering. If the sun is low, you might also have your own shadow in the shot. So I think the saying is something to take into consideration but shouldn’t be adhered to exclusively and in every situation.

Check the sky!
Clouds can severely impact the amount of sunlight that reaches the ground. So if you have set up an interview or a longer shot and locked the exposure at a moment when there isn’t a single cloud in front of the sun, there might already be one crawling along nearby that will take away a lot of light and leave you with an underexposed image at some point. Or vice versa. So either do your thing when there are no (fast-moving) clouds in the vicinity of the sun, or when the cloud cover will stay fairly constant for the next few minutes.

Use an ND filter!
As I pointed out in my last blog post The Smartphone Camera Exposure Paradox, a bright sunny day can create exposure problems with a smartphone if you want to work with the „recommended“ (double the frame rate, for instance 1/50s at 25fps) or at least an acceptable shutter speed, because phones only have a fixed, wide-open aperture. Even with the lowest ISO setting, you will still have to use a (very) fast shutter speed that can make motion appear jerky. That’s why it’s good to have a neutral density (ND) filter in your kit, which reduces the amount of light that hits the sensor. There are two different kinds of ND filters: fixed and variable. The latter lets you adjust the strength of the filtering effect. Unlike the lenses on dedicated cameras, the lenses on smartphones don’t have a filter thread, so you either have to use some sort of case or rig with a filter thread or a clip-on ND filter.
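To get a feel for the numbers: each stop of ND doubles the exposure time you can use. So if bright sun forces, say, 1/1600s at base ISO while you’re filming 25fps, a quick calculation tells you how strong a filter to pick. A minimal sketch in Python (the 1/1600s figure is just an illustrative assumption):

```python
import math

def target_shutter(fps: float) -> float:
    """'Double the frame rate' rule: target shutter duration in seconds."""
    return 1.0 / (2.0 * fps)

def nd_stops_needed(forced_shutter: float, fps: float) -> int:
    """Full stops of ND needed to get from the shutter speed the bright
    light forces on you down to the motion-friendly target."""
    ratio = target_shutter(fps) / forced_shutter
    return max(0, math.ceil(math.log2(ratio)))

stops = nd_stops_needed(1 / 1600, 25)  # target is 1/50s at 25fps
print(stops, "stops ->", "ND" + str(2 ** stops))  # 5 stops -> ND32
```

An ND32 filter (5 stops) would bring 1/1600s exactly down to 1/50s; a variable ND gives you headroom in either direction.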

Get a white case!


Ever heard of the term “albedo“? It designates the amount of sunlight (or, if you want to be more precise, solar radiation) that is reflected by objects. Black objects reflect less and absorb more solar radiation (smaller albedo) than white objects (higher albedo). You can easily get a feeling for the difference by wearing a black or a white shirt on a sunny day. Similarly, if you expose a black or dark-colored phone to intense sunlight, it will absorb more heat than a white or light-colored phone and therefore be more prone to overheating. So if you do have a black or dark-colored phone, it might be a good idea to get yourself a white case so more sunlight is reflected off of the device. Vice versa, if you have a white or light-colored phone with a black case, take the case off. Be aware though that a white case only reduces the absorption of „external heat“ from solar radiation, not the internal heat generated by the phone itself, something that particularly builds up when you shoot in 4K/UHD or at high frame rates or bit rates. You should also take into consideration that a case that fits super tight might reduce the phone’s ability to dissipate internal heat. Ergo: a white phone (case) only offers some protection against the impact of direct solar radiation, not against internal heat produced by the phone itself or high ambient temperatures.

Maximize screen brightness!
This is pretty obvious: bright conditions make it harder to see the screen and judge framing, exposure and focus, so it’s good to crank up the screen brightness. Some camera apps let you switch on a feature that automatically maximizes screen brightness while using the app.

Get a power bank!
Maximizing screen brightness will significantly increase battery consumption though, so you should think about having a back-up power bank at hand – at least if you are going on a longer shoot. But most of us already have one or two lying around, so this might not even be an additional purchase.

Use exposure/focus assistants of your camera app!
Analytical assistant tools in certain camera apps can be very helpful in bright conditions when it’s hard to see the screen. While very few native camera apps offer some limited assistance in this respect, it’s an area where dedicated 3rd-party apps like Filmic Pro, mcpro24fps, ProTake, MoviePro, Mavis etc. can really shine (pardon the pun). For setting the correct exposure you can use Zebra (displays stripes on overexposed areas of the frame) or False Color (renders the image into solid colors identifying areas of under- and overexposure – usually blue for underexposure and red for overexposure). For setting the correct focus you can use Peaking (displays a colored outline on things in focus) and Magnification (digitally magnifies the image). Not all of the mentioned apps offer all of these tools. And there’s also a downside: using these tools puts extra stress on your phone’s chipset, which means more internal heat – so only use them while setting exposure and focus for the shot, and turn them off once you are done.
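If you’re curious how these assistants work under the hood, the core of a Zebra overlay is just a per-pixel brightness threshold. A toy sketch (the 8-bit luma values and the threshold of 235 are my own illustrative choices; real apps pick their own levels and draw animated stripes):

```python
def zebra_mask(luma_rows, threshold=235):
    """Flag pixels whose 8-bit luma is at or above the zebra threshold,
    i.e. the areas a camera app would paint with stripes."""
    return [[y >= threshold for y in row] for row in luma_rows]

frame = [
    [30, 120, 250],   # the pixel at 250 is blown out
    [240, 90, 236],
]
print(zebra_mask(frame))  # [[False, False, True], [True, False, True]]
```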

Photo: Moondog Labs

Use a sun hood!
Another way to better see the screen in sunny weather is to use a sun hood. There are multiple generic smartphone sun hoods available online, but also one from dedicated mobile camera gear company Moondog Labs. Watch out: SmallRig, a somewhat renowned accessory provider for independent videography and filmmaking, has a sun hood for smartphones in its portfolio, but it’s made for using smartphones as a secondary device with regular cameras or drones, so there’s no cut-out for the lens or open back – which renders it useless if you want to shoot with your phone. This cautionary advice also applies to other sun hoods for smartphones: check for a lens cut-out before you buy.

Photo: RollCallGames

Sweaty fingers?
An issue I encountered last summer on a bike tour, where I would occasionally stop to take some shots of interesting scenery along the road, was that sweaty hands/fingers can cause problems with a phone’s touch screen: touches aren’t registered, or are registered in the wrong places. This can be quite annoying. Turns out there’s such a thing as „anti-sweat finger sleeves“, which were apparently invented for passionate mobile gamers. So I guess kudos to PUBG and Fortnite aficionados? There’s also another option: you can use a stylus or pen to navigate the touch screen. Users of the Samsung Galaxy Note series are clearly at an advantage here, as the stylus comes with the phone.

Photo: George Becker via Pexels.com

Don’t forget the water bottle!
Am I going to tell you to cool your phone with a refreshing shower of bottled drinking water? Despite the fact that many phones nowadays offer some level of water-resistance, the answer is no. I’m including this tip for two reasons: First, it’s always good to stay hydrated if you’re out in the sun – I have had numerous situations where I packed my gear bag with all kinds of stuff (most of which I didn’t need in the end) but forgot to include a bottle of water (which I desperately needed at some point). Secondly, you can use a water bottle as an emergency tripod in combination with a rubber band or hair tie as shown in workshops by Marc Settle and Bernhard Lill. So yes, don’t forget to bring a water bottle!

Got other tips for smartphone videography in the summertime? Let us know!

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter featuring a personal selection of interesting things that happened in the world of mobile video in the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

#45 The Smartphone Camera Exposure Paradox — 11. May 2021


Ask anyone about the weaknesses of smartphone cameras and you will surely find that people often point towards a phone’s low-light capabilities as its Achilles heel – or at least one of them. When you are outside during the day, it’s relatively easy to shoot some good-looking footage with your mobile device, even with budget phones. Once it’s darker or you’re indoors, things get more difficult. The reason for this is essentially that the image sensors in smartphones are still pretty small compared to those in DSLMs/DSLRs or professional video/cinema cameras. Bigger sensors can collect more photons (light) and produce better low-light images. A so-called “Full Frame” sensor in a DSLM like Sony’s Alpha 7 series has a surface area of 864 mm², a common 1/2.5” smartphone image sensor only 25 mm².

So why not just put a huge sensor in a smartphone? While cameras in smartphones have undeniably become a very important factor, the phone is still very much a multi-purpose device and not a single-purpose one like a dedicated camera – for better or worse. That means there are many things to consider when building a phone. I doubt anyone would want a phone with a form factor that doesn’t fit in your pocket, and the flat form factor makes it difficult to build proper optics around larger sensors. Larger sensors also consume more power and produce more heat, not exactly something desirable.

If we are talking about smartphone photography from a tripod, some of the missing sensor size can be compensated for with long exposure times. The advancements in computational imaging and AI have also led to dedicated and often quite impressive photography “Night Modes” on smartphones. But very long shutter speeds aren’t really an option for video, as any movement appears extremely blurred – and while today’s chipsets can already handle supportive AI processing for photography, the far more resource-intensive processing for videography is yet a bridge too far.
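To put the sensor gap into numbers, here’s a quick back-of-the-envelope sketch using the figures above (full frame measures 36×24mm; the 25 mm² value for a 1/2.5” sensor is taken from the text):

```python
import math

full_frame_mm2 = 36.0 * 24.0   # 864 mm², e.g. Sony Alpha 7 series
phone_mm2 = 25.0               # common 1/2.5" smartphone sensor

ratio = full_frame_mm2 / phone_mm2   # ~34.6x the light-gathering area
stops = math.log2(ratio)             # ~5.1 stops of low-light headroom

print(round(ratio, 1), "x area,", round(stops, 1), "stops")
```

Five-plus stops of gathered light is roughly the difference between a clean image and a noisy mess in dim conditions.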
So despite the fact that latest developments signal that we’re about to experience a considerable bump in smartphone image sensor sizes (Sony and Samsung are about to release a 1-inch/almost 1-inch image sensor for phones), one could say that most/all smartphone cameras (still) have a problem with low-light conditions. But you know what? They also have a problem with the exact opposite – very bright conditions!

If you know a little bit about how cameras work and how to set the exposure manually, you have probably come across something called the “exposure triangle”. The exposure triangle contains the three basic parameters that let you set and adjust the exposure of a photo or video on a regular camera: shutter speed, aperture and ISO. In more general terms you could also say: time, size and sensitivity. Shutter speed signifies the amount of time that the still image or a single frame of video is exposed to light, for instance 1/50 of a second. The longer the shutter speed, the more light hits the sensor and the brighter the image will be. Aperture refers to the size of the iris’ opening through which the light passes before it hits the sensor (or, way back when, the film strip); it’s commonly measured in f-stops, for instance f/2.0. The bigger the aperture (= the SMALLER the f-stop number), the more light reaches the sensor and the brighter the image will be. ISO (or “Gain” in some dedicated video cameras) finally refers to the sensitivity of the image sensor, for instance ISO 400. The higher the ISO, the brighter the image will be. Most of the time you want to keep the ISO as low as possible because higher sensitivity introduces more image noise.

So what exactly is the problem with smartphone cameras in this respect? Well, unlike dedicated cameras, smartphones don’t have a variable aperture – it’s fixed and can’t be adjusted. Ok, there actually have been a few phones with variable aperture: most notably, Samsung had one on the S4 Zoom (2013) and K Zoom (2014), introduced a dual-aperture approach with the S9/Note9 (2018), held on to it for the S10/Note10 (2019), but dropped it again for the S20/Note20 (2020). As you can see from the very limited selection though, this has been more of an experiment. The fixed aperture means that the exposure triangle for smartphone cameras only has two adjustable parameters: shutter speed and ISO. Why is this problematic? When there’s movement in a video (either because something moves within the frame or the camera itself moves), we as an audience have become accustomed to a certain degree of motion blur, which is related to the shutter speed used. The rule of thumb here says: double the frame rate. So if you are shooting at 24fps, use a shutter speed of 1/48s; at 25fps, use 1/50s; 1/60s for 30fps, etc. This suggestion is not set in stone and in my humble opinion you can deviate from it to a certain degree without it becoming too obvious for casual, non-pixel-peeping viewers – but if the shutter speed is very slow, everything begins to look like a drug-induced stream-of-consciousness experience, and if it’s very fast, things appear jerky and shutter speed becomes stutter speed. So with the aperture being fixed and the shutter speed set at a “recommended” value, you’re left with ISO as the only adjustable exposure parameter. Reducing the sensitivity of the sensor is usually only technically possible down to an ISO between 50 and 100, which will still give you a (heavily) overexposed image on a sunny day outside.
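Here’s a rough worked example of how far off you land on a sunny day, using the standard exposure value definition EV = log2(N²/t) referenced to ISO 100, where a full-sun scene sits at roughly EV 15. The f/1.8 aperture and ISO 50 floor below are illustrative assumptions for a typical phone main camera, not measurements:

```python
import math

def required_shutter(ev100: float, f_number: float, iso: float) -> float:
    """Shutter time (s) for a correct exposure of a scene measuring
    ev100, given a fixed aperture and the chosen ISO."""
    ev = ev100 + math.log2(iso / 100.0)  # lower ISO -> longer shutter
    return f_number ** 2 / 2 ** ev

t = required_shutter(15, 1.8, 50)
print(f"1/{round(1 / t)}s")  # roughly 1/5000s – nowhere near 1/50s
```

Even with the ISO bottomed out, the fixed lens forces a shutter speed a hundred times faster than the motion-blur-friendly 1/50s at 25fps.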
So here’s our “paradox”: Too much available light can be just as much of an issue as too little when shooting with a smartphone.

What can we do about the two problems? Until significantly bigger smartphone image sensors or computational image enhancement for video arrive, the best way to tackle the low-light challenge is to provide your own additional lighting or look for more available light, be it natural or artificial. Depending on your situation, this might be relatively easy or downright impossible. If you are trying to capture an unlit building at night, you will most likely not have a sufficient number of ultra-bright floodlights at hand. If you are interviewing someone in a dimly lit room, a small LED might provide just enough light to keep the ISO at a level without too much image noise.

Clip-on variable ND filter

As for the too-much-light problem (which ironically gets even worse with the bigger sensors setting out to remedy the low-light problem): try to pick a less sun-drenched spot, shoot with a faster shutter speed if there is little or no action in the shot, or – and this might be the most flexible solution – get yourself an ND (neutral density) filter that reduces the amount of light passing through the lens. While some regular cameras have built-in ND filters, this feature has yet to appear in any smartphone, although OnePlus showcased a prototype phone last year that had something close to a proper ND filter, using a technology called “electrochromic glass” to hide the lens while still letting (less) light pass through (check out this XDA Developers article). So until this actually makes it to the market and proves to be effective, the filter has to be an external one that is either clipped on or screwed on if you use a dedicated case with a corresponding filter thread. You also have the choice between a variable and a non-variable (fixed density) ND filter. A variable ND filter will let you adjust the strength of its filtering effect, which is great for flexibility, but it also has some disadvantages, like the possibility of cross-polarization. If you want to learn more about ND filters, I highly recommend checking out this superb in-depth article by Richard Lackey.
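A side note on buying one: the various ND naming schemes all encode the same thing once you assume the usual convention that one stop halves the light. An ND number is a power of two (ND8 cuts 3 stops), and an optical density of 0.3 corresponds to one stop:

```python
import math

def nd_factor_to_stops(factor: float) -> float:
    """ND2/ND4/ND8... naming: the number is the light reduction factor."""
    return math.log2(factor)

def optical_density_to_stops(density: float) -> float:
    """ND 0.3/0.6/0.9... naming: each 0.3 of density is one stop."""
    return density / math.log10(2)

print(nd_factor_to_stops(64))                   # 6.0 stops
print(round(optical_density_to_stops(0.9), 1))  # 3.0 stops
```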

So what’s the bigger issue for you personally? Low-light or high-light? 


#43 The Rode Wireless Go II review – Essential audio gear for everyone? — 20. April 2021


Australian microphone maker RØDE is an interesting company. For a long time, the main thing they had going for them was that they provided an almost-as-good but relatively low-cost alternative to high-end brands like Sennheiser or AKG and their established microphones, thereby “democratizing” decent audio gear for the masses. Over the last few years, however, Rode grew from “mimicking” the products of other companies into a highly innovative force, creating original products that others now mimic in return. Rode was the first to come out with a dedicated quality smartphone lavalier microphone (the smartLav+), for instance, and in 2019 the Wireless GO established another new microphone category: the ultra-compact wireless system with an inbuilt mic on the TX unit. It worked right out of the box with DSLMs/DSLRs, via a TRS-to-TRRS or USB-C cable with smartphones, and via a 3.5mm-to-XLR adapter with pro camcorders. The Wireless GO became an instant runaway success and there’s much to love about it – seemingly small details like the clamp that doubles as a cold shoe mount are plain ingenuity. The Interview GO accessory even turns it into a super-lightweight handheld reporter mic, and you can also use it like a more traditional wireless system with a lavalier mic that plugs into the 3.5mm jack of the transmitter. But it wasn’t perfect (how could it be as a first-generation product?). The flimsy attachable wind-screen became sort of a running joke among GO users (I had my fair share of trouble with it), and many envied the ability of the similar Saramonic Blink 500 series (B2, B4, B6) to feed two transmitters into a single receiver – albeit without the ability for split channels. Personally, I also had occasional problems with interference when using it with an XLR adapter on bigger cameras and a Zoom H5 audio recorder.

Now Rode has launched a successor, the Wireless GO II. Is it the perfect compact wireless system this time around?

The most obvious new thing about the GO II is that the kit comes with two TX units instead of just one – already know where we are headed with this? Let’s talk about it in a second. A first look at the Wireless GO II’s RX and TX units doesn’t really reveal anything new – apart from the fact that they are labeled “Wireless GO II”, the form factor of the little black square boxes is exactly the same. That’s both good and maybe partly bad I guess. Good because yes, just like the original Wireless GO, it’s a very compact system; “partly bad” because I suppose some would have loved to see the TX unit be even smaller for using it standalone as a clip-on with the internal mic and not with an additional lavalier. But I suppose the fact that you have a mic and a transmitter in a single piece requires a certain size to function at this point in time. The internal mic also pretty much seems to be the same, which isn’t a bad thing per se – it’s quite good! I wasn’t able to make out a noticeable difference in my tests so far, but maybe the improvements are too subtle for me to notice – I’m not an audio guy. Oh wait, there is one new thing on the outside: a new twist mechanism for the wind-screen – and this approach actually works really well and keeps the wind-screen in place, even if you pull on it. For those of us who use it outdoors, this is really a big relief.

But let’s talk about the new stuff “under the hood”, and let me tell you, there’s plenty! First of all, as hinted at before, you can now feed two transmitters into one receiver. This is perfect if you need to mic up two persons for an interview. With the original Wireless GO you had to use two receivers and an adapter cable to make it work with a single audio input.

It’s even better that you can choose between a “merged mode” and a “split mode”. The “merged mode” combines both TX sources into a single pre-mixed audio stream; “split mode” sends the two inputs into separate channels (left and right on a stereo mix, so basically dual mono). The “split mode” is very useful because it allows you to access and adjust both channels individually afterwards – this can come in handy, for instance, if you have a two-person interview and one person coughs while the other one is talking. If the two sources are pre-mixed (“merged mode”) into the same channel, you will not be able to eliminate the cough without affecting the voice of the person talking. When you have the two sources in separate channels, you can just mute the noisy channel for that moment in post. You can switch between the two modes by pressing both the dB button and the pairing button on the RX unit at the same time.

One thing you should be aware of when using the split-channels mode to record into a smartphone: this only works with the digital input port of the phone (USB-C on Android, Lightning on iPhone/iPad). If you use a TRS-to-TRRS cable and feed it into the 3.5mm headphone jack (or a 3.5mm adapter, like the one for the iPhone), the signal gets merged, as there is just one contact left on the pin for mic input – only allowing mono. If you want to use the GO II’s split-channels feature with an iPhone, there’s currently only one reliable solution: Rode’s SC15 USB-C to Lightning cable, which unfortunately is a separate purchase (around 25 Euros). With Android it’s less restrictive. You can purchase the equivalent SC16 USB-C to USB-C cable from Rode (around 15 Euros), but I tested it with a more generic USB-C to USB-C cable (included with my Samsung T5 SSD drive) and it worked just fine. So if you happen to have a USB-C to USB-C cable around, try that first before buying something new. You should also consider that you need a video editing software that lets you access both channels separately if you want to adjust them individually. On desktop, there are lots of options, but on mobile devices the only option is currently LumaFusion (I’m planning a dedicated blog post about this).
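If you ever end up with a dual-mono recording and an editor that can’t address the channels separately, you can also split the file yourself before importing. A minimal sketch using Python’s standard wave module – it assumes you’ve got the recording as a 16-bit stereo WAV, and the file names are placeholders:

```python
import wave

def split_dual_mono(stereo_path, left_path, right_path):
    """Split a 16-bit stereo WAV (e.g. a 'split mode' recording with one
    transmitter per channel) into two mono files."""
    with wave.open(stereo_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        framerate = src.getframerate()
        frames = src.readframes(src.getnframes())
    # Stereo 16-bit audio interleaves 2-byte samples: L R L R ...
    left = b"".join(frames[i:i + 2] for i in range(0, len(frames), 4))
    right = b"".join(frames[i:i + 2] for i in range(2, len(frames), 4))
    for path, data in ((left_path, left), (right_path, right)):
        with wave.open(path, "wb") as dst:
            dst.setnchannels(1)
            dst.setsampwidth(2)
            dst.setframerate(framerate)
            dst.writeframes(data)

# split_dual_mono("interview.wav", "person_a.wav", "person_b.wav")
```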

If you don’t need the extra functionality of the “split mode” or the safety channel and are happy to use it with your device’s 3.5mm port (or a corresponding adapter), be aware that you will still need a TRS-to-TRRS adapter (cable) like Rode’s own SC4 or SC7 because the included one from Rode is TRS-to-TRS which works fine for regular cameras (DSLMs/DSLRs) but not with smartphones which have a TRRS headphone jack – well, if they still have one at all, that is. It may all look the same at first sight but the devil is in the detail, or in this case the connectors of the pin.

If you want to use the GO II with a camera or audio recorder that has XLR inputs, you will need a 3.5mm to XLR adapter like Rode’s own VXLR+ or VXLR Pro.

Along with the GO II, Rode released a desktop application called Rode Central which is available for free for Windows and macOS. It lets you activate and fine-tune additional features on the GO II when it’s connected to the computer. You can also access files from the onboard recording, a new feature I will talk about in a bit. A mobile app for Android and iOS is not yet available but apparently Rode is already working on it.

One brilliant new software feature is the ability to record a simultaneous -12dB safety track when in “merged mode”. It’s something Rode already implemented on the VideoMic NTG and it’s a lifesaver when you don’t know in advance how loud the sound source will be. If there’s a very loud moment in the main track and the audio clips, you can just use the safety track which at -12dB probably will not have clipped. The safety channel is however only available when recording in “merged mode” since it uses the second channel for the back-up. If you are using “split mode”, both channels are already filled and there’s no space for the safety track. It also means that if you are using the GO II with a smartphone, you will only be able to access the safety channel feature when using the digital input (USB-C or Lightning), not the 3.5mm headphone jack analogue input, because only then will you have two channels to record into at your disposal.
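For the arithmetically inclined: -12 dB corresponds to roughly a quarter of the linear amplitude, which is why the safety track usually survives peaks that clip the main track. A tiny sketch of the math and the fallback idea (the clipping threshold and the +12 dB make-up gain are illustrative simplifications of what you’d do in an editor):

```python
def db_to_gain(db):
    """Convert a decibel amplitude change to a linear gain factor."""
    return 10 ** (db / 20)

def choose_sample(main, safety, clip_limit=0.999):
    """If the main channel clipped, fall back to the safety channel,
    boosted back up by +12 dB; samples are floats in -1.0..1.0."""
    if abs(main) >= clip_limit:
        return safety * db_to_gain(12)
    return main

print(round(db_to_gain(-12), 3))  # 0.251 – the safety track's level
```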

Another lifesaver is the new onboard recording capability which basically turns the two TX units into tiny standalone field recorders, thanks to their internal mic and internal storage. The internal storage is capable of recording up to 7 hours of uncompressed wav audio (the 7 hours also correspond with the battery life which probably isn’t a coincidence). This is very helpful when you run into a situation where the wireless connection is disturbed and the audio stream is either affected by interference noise or even drop-outs.

There are some further options you can adjust in the Rode Central app: you can now activate a more nuanced gain control pad for the output of the RX unit. On the original GO, you only had three different settings (low, medium, high); now you have a total of 11 (in 3 dB steps from -30 dB to 0 dB). You can also activate a reduced sensitivity for the input of the TX units when you know that you are going to record something very loud. Furthermore, you can enable a power saver mode that will dim the LEDs to preserve some additional battery life.

Other improvements over the original GO include a wider transmission range (200m line-of-sight vs. 70m) and better shielding from RF interference.

One thing that some people were hoping for in an updated version of the Wireless GO is the option to monitor the audio that goes into the receiver via a headphone output – sorry to say that didn’t happen but as long as you are using a camera or smartphone/smartphone app that gives you live audio monitoring, this shouldn’t be too big of a deal.

Aside from the wireless system itself, the GO II comes with a TRS-to-TRS 3.5mm cable to connect it to regular cameras with a 3.5mm input, three USB-C to USB-A cables (for charging and for connecting it to a desktop computer/laptop), three windshields, and a pouch. The pouch isn’t that great in my opinion – I would have preferred a more robust case, but I guess it’s better than nothing at all. And as mentioned before: I would have loved to see a TRS-to-TRRS, USB-C to USB-C and/or USB-C to Lightning cable included to ensure out-of-the-box compatibility with smartphones. Unlike some competitors, the kit doesn’t come with separate lavalier mics, so if you don’t want to use the internal mics of the transmitters, you will have to make an additional purchase unless you already have some. Rode offers the dedicated Lavalier GO for around 60 Euros. The price for the Wireless GO II is around 300 Euros.

So is the Rode Wireless GO II perfect? Not quite, but it’s pretty darn close. It surely builds upon an already amazingly compact and versatile wireless audio system and adds some incredible new features so I can only recommend it for every mobile videomaker’s gear bag. If you want to compare it against a viable alternative, you could take a look at the Saramonic Blink 500 Pro B2 which is roughly the same price and comes with two lavalier microphones or the Hollyland Lark 150.


#36(0) The Insta360 One X2 – fun & frustration — 5. January 2021


A couple of years ago, 360° (video) cameras burst onto the scene and seemed to be all the rage for a while. The initial excitement faded relatively quickly, however, when producers realized that this kind of video didn’t resonate with the public as much as they had thought it would – at least in the form of immersive VR (Virtual Reality) content, for which you need extra hardware that most people didn’t bother to get or didn’t get hooked on. From a creator’s side, 360 video also involved some extra and – dare I say – fairly tedious workflow steps to deliver the final product (I have one word for you: stitching). That’s not to say that this extraordinary form of video doesn’t have value or vanished into total obscurity – it just didn’t become a mainstream trend.

Among the companies that heavily invested in 360 cameras was Shenzhen-based Insta360. They offered a wide variety of different devices: some standalone, some meant to be physically connected to smartphones. I actually got the Insta360 Air for Android devices, and while it was not a bad product at all and fun for a short while, the process of connecting it to the USB port of the phone for use and then taking it off again to put the phone back in your pocket or use it for other things quickly sucked out the motivation to keep using it.

Repurposing 360 video

While continuing to develop new 360 cameras, Insta360 realized that 360 video could be utilized for something other than just regular 360 spherical video: overcapture and subsequent reframing for “traditional”, “flat” video. What does this mean in plain English? Well, the original spherical video that is captured is much bigger in terms of resolution/size than the one you want as a final product (for instance classic 1920×1080), which gives you the freedom to choose your angle and perspective in post-production and even create virtual camera movement and other cool effects. Insta360 by no means invented this idea, but they were clever enough to shift their focus towards this use case. Add to that the marketing-gold feature of the “invisible selfie-stick” (taking advantage of a dual-lens 360 camera’s blind spot between its lenses), brilliant “Flow State” stabilization and a powerful mobile app (Android & iOS) full of tricks, and you end up with a significant popularity boost for your products!

The One X and the wait for a true successor

The one camera that really proved to be an instant and long-lasting success for Insta360 was the One X, released in 2018. A very compact & slick form factor, ease of use, very decent image quality (except in low light) and the clever companion app breathed some much-needed life into a fairly wrinkled and deflated 360 video camera balloon. In early 2020 (you know, the days when most of us still didn’t know there was a global pandemic at our doorstep), Insta360 surprised us by not releasing a direct successor to everybody’s darling (the One X) but the modular One R, a flexible and innovative yet slightly clunky brother to the One X. It wasn’t until the end of October 2020 that Insta360 finally revealed the true successor to the One X, the One X2.

In the months prior to the announcement of the One X2, I had actually thought about getting the original One X (I wasn’t fully convinced by the One R) but it was sold out in most places and there were some things that bothered me about the camera. To my delight, Insta360 seemed to have addressed most of the issues that I (and obviously many others) had with the original One X: They improved the relatively poor battery life by making room for a bigger battery, they added the ability to connect an external mic (both wirelessly via Bluetooth and via the USB-C port), they included a better screen on which you can actually see things and change settings in bright sunlight, they gave you the option to stick on lens guards to protect the delicate protruding lenses and they made it more rugged, including an IPX8 waterproof certification (up to 10m) and a less flimsy thread for mounting it to a stick or tripod. All good then? Not quite. Just by looking at the spec sheet, people realized that there wasn’t any kind of upgrade in terms of video resolution or even just frame rates. It’s basically the same as the One X: it maxes out at 5.7k (5760×2880) at 30fps (with options for 25 and 24), 4k at 50fps and 3k at 100fps. The maximum bitrate is 125 Mbit/s. I’m sure quite a few folks had hoped for 8k (to get on par with the Kandao Qoocam 8K) or at the very least a 50/60fps option for 5.7k. Well, tough luck.

While I can certainly understand some of the frustration that there hasn’t been any bump in resolution or frame rates in two years, putting 8K into such a small device – and having that footage remain workable for editing on today’s mobile devices – probably wasn’t a step Insta360 was ready to take, since the higher resolution could have come at the cost of a worse user experience. Personally, I wasn’t bothered too much by this since the other hardware improvements over the One X were good enough for me to go ahead and make the purchase. And this is where my own frustrations began…

Insta360 & me: It’s somewhat difficult…

While I was browsing the official Insta360 store to place my order for the One X2, I noticed a pop-up saying that you could get 5% off your purchase if you signed up for their newsletter. They did exclude certain cameras and accessories, but the One X2 was mentioned nowhere. So I thought, “Oh, great! This comes at just the right time!”, and signed up for the newsletter. After getting the discount code, however, entering it during check-out always returned a “Code invalid” error message. I took to Twitter to ask them about this – no reply. I contacted their support by email and they eventually and rather flippantly told me something like “Oh, we just forgot to put the X2 on the exclusion list, sorry, it’s not eligible!”. Oh yes, the Insta360 support and I were off to a great start!

Wanting to equip myself with the (for me) most important accessories, I intended to purchase a pair of spare batteries and the microphone adapter (USB-C to 3.5mm). I could write a whole rant about how outrageous I find the fact that literally everyone seems to make proprietary USB-C to 3.5mm adapters that don’t work with other brands/products. E-waste galore! Anyway, there’s a USB-C to 3.5mm microphone adapter from Insta360 available for the One R and I thought, well, at least within the Insta360 ecosystem there should be some cross-device compatibility. Hell no, they told me the microphone adapter for the One R doesn’t work with the One X2. Ok, so I need to purchase the more expensive new one for the X2 – swell! But wait, I can’t, because while it’s listed in the Insta360 store, it’s not available yet. And neither are extra batteries. The next bummer. So I bought the Creator Kit including the “invisible” selfie-stick, a small tripod, a microSD card, a lens cap and a pair of lens guards.

A couple of weeks later, the package arrived – no problem, in the era of Covid I’m definitely willing to cut some slack in terms of delivery times, and the merchandise is sent from China so it has quite a way to Germany. I opened the package, took out the items and checked them to see if anything was broken. I noticed that one of the lens guards had a small blemish/scratch on it. I put them on the camera anyway, thinking maybe it wouldn’t really show in the footage. Well, it did. A bit annoying, but stuff like that happens – a lemon. I contacted the support again. They wanted me to take a picture of the affected lens guard. Okay. I sent them the picture. They bluntly replied that I should just buy a new one from their store, basically insinuating that it was me who damaged the lens guard. What terrible customer service! I suppose I would have mustered up some understanding for their behaviour if I had contacted them a couple of days or weeks later, after actually using the X2 for some time outdoors where stuff can quickly happen. But I got in touch with them the same day the delivery arrived, and they should have been able to see that since the delivery had a tracking number. Also, this item costs 25 bucks in the Insta360 store but probably only a few cents in production, and I wasn’t even asking about a pair but only one – why make such a fuss about it? So there was some back-and-forth, and only after I threatened to return the whole package and asked for a complete refund did they finally agree to send me a replacement pair of lens guards at no extra cost. On a slightly positive note, the replacements did arrive very quickly, only a couple of days later.

Is the Insta360 One X2 actually a good camera?

So what an excessive prelude I have written! What about the camera itself? I have to admit that after using it for about a month, it’s been a lot of fun for the most part. The design is rugged yet still beautifully simplistic and compact, the image quality in bright, sunny conditions is really good (if you don’t mind the slightly over-sharpened wide-angle look and that it’s still “only” 5.7k – remember this resolution is for the whole 360 image, so it’s not equivalent to a 5.7k “flat” image), the stabilization is generally amazing (as long as the camera and its sensor are not exposed to extreme physical shakes which the software stabilization can’t compensate for) and the reframing feature, in combination with the camera’s small size and weight, gives you immense flexibility to create very interesting and extraordinary shots.

Sure, it also has some weaknesses: Despite the 5.7k 360 resolution, if you want to export as a regular flat video, you are limited to 1080p. If you need your final video to be in UHD/4K non-360 resolution, this camera is not for you. The relatively small sensor (I wasn’t able to find out the exact size for the X2 but I assume it’s the same as the One X, 1/2.3″) makes low-light situations at night or indoors a challenge despite a (fixed) aperture of f/2.0 – even a heavily overcast daytime sky can prove less than ideal. Yes, a slightly bigger sensor compared to its predecessors would have been welcome. The noticeable amount of image noise that auto-exposure introduces in such dim conditions can be reduced by exposing manually (you can set shutter speed and ISO), but then of course you just might end up with an image that’s quite dark. The small sensor also doesn’t allow for any fancy “cinematic” bokeh, but in combination with the fixed focus this has an upside that shouldn’t be underestimated for self-shooters: You don’t have to worry about a pulsating auto-focus or being out of focus, as everything is always in focus. You can also shoot video in LOG (flatter image for more grading flexibility) and HDR (improved dynamic range in bright conditions) modes. Furthermore, there’s a dedicated non-360 video mode with a 150 degree field-of-view, but except for a slight bump in resolution compared to flat reframed 360 video (1440p vs. 1080p) and smaller file sizes (you can also shoot your 5.7k in the H.265 codec to save space), I don’t see myself using this a lot as you lose all the flexibility in post.
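To put the “5.7k is for the whole sphere, not a flat image” caveat into numbers: assuming an equirectangular projection (a simplification – pixel density actually varies across the frame), the 5760 horizontal pixels of the spherical video cover a full 360 degrees of view, so any reframed crop only ever draws on a slice of them. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope math for reframing: how many source pixels does
# a flat crop from a 5760px-wide equirectangular 360 frame draw on?
# (Approximation measured at the equator of the sphere.)

def reframed_width(sphere_width_px: int, fov_degrees: float) -> int:
    """Horizontal source pixels available for a crop with the given field of view."""
    px_per_degree = sphere_width_px / 360.0
    return round(fov_degrees * px_per_degree)

print(reframed_width(5760, 90))   # → 1440: a 90° crop draws on ~1440 source pixels
print(reframed_width(5760, 120))  # → 1920: even a wide 120° crop is only ~FHD wide
```

Which is presumably why the 1080p cap on flat exports is less an arbitrary software limit and more a sensible ceiling for 5.7k spherical footage.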

While it’s good that all the stitching is done automatically and the camera does a fairly good job, it’s not perfect and you should definitely familiarize yourself with where the (video) stitchline goes to avoid it in the areas where you capture important objects or persons, particularly faces. As a rule of thumb when filming yourself or others you should always have one of the two lenses pointed towards you/the person and not face the side of the camera. It’s fairly easy to do if you usually have the camera in the same position relative to yourself but becomes more tricky when you include elaborate camera movements (which you probably will as the X2 basically invites you to do this!).

Regarding the audio, the internal 4-mic ambisonic setup can produce good results for ambient sound, particularly if you have the camera close to the sound source – like when you have it on a stick pointing down while walking over fresh snow, dead leaves, gravel etc. For recording voices in good quality, you also need to be pretty close to the camera’s mics; having it on a fully extended selfie-stick isn’t ideal. If you want to use the X2 on an extended stick and talk to the camera you should use an external mic, either one that is directly connected to the camera or one plugged into an external recorder, which means syncing audio and video later in post. As I have mentioned before, the X2 now offers support for external mics via the USB-C charging port (with the right USB-C-to-3.5mm adapter) and also via Bluetooth. Insta360 highlights in their marketing that you can use Apple’s AirPods (Pro), but you can also use other mics that work via Bluetooth. The audio sample rate of Bluetooth mics is currently limited to 16kHz by the standard, but depending on the mic used you can get decent audio. I’ll probably write a separate article on using external mics with the X2 once my USB-C to 3.5mm adapter arrives. Wait, does the X2 shoot 360 photos as well? Of course it does, and they turn out quite decent, particularly with the new “Pure Shot” feature – the stitching is even better than in video mode. It’s no secret though that the X2 has a focus on video with all its abilities, and for those who mainly care about 360 photography for virtual tours etc., the offerings in the Ricoh Theta line will probably be the better choice.

The Insta360 mobile app

The Insta360 app (Android & iOS) might deserve its own article, but suffice it to say that while it can occasionally seem a bit overwhelming and cluttered and you still experience glitches now and then, it’s very powerful and generally works well. Do note however that if you want to export a 360 video in full 5.7k resolution, you have to transfer the original files to a desktop computer and work with them in the (free) Insta360 Studio software (Windows/macOS), as export from the mobile app is limited to 4K. You should also be aware that neither the mobile app nor the desktop software works as a fully-fledged traditional video editor for immersive 360 video where you can have multiple clips on a timeline and arrange them for a story. In the mobile app, you do get such an editing environment (“Stories” – “My Stories” – “+ Create a story”), but while you can use your original spherical 360 footage here, you can only export the project as a (reframed) flat video (max resolution 2560×1440). If you need your export to be an actual 360 video with the according metadata, you can only do this one clip at a time outside the “Stories” editing workspace. But as mentioned before, Insta360 focuses on the reframing of 360 video with its cameras and software, so not too many people might be bothered by that. One thing that really got on my nerves while editing within the app on an iPad: When you are connected to the X2 over WiFi, certain parts of the app that rely on a data connection don’t work – for instance, you are not able to browse all the features of the Shot Lab (only those that have been cached before) or preview/download music tracks for the video. You have to download the clip, disconnect from the X2, re-connect to your home WiFi and only then download the track you want to use. This is less of a problem on a phone, where you can still have a mobile data connection while using a WiFi connection to the X2 (if you don’t mind using up mobile data), but on an iPad or any device without an alternative internet connection, it’s quite annoying.

Who is the One X2 for?

Well, I’d say that it can be particularly useful for solo-shooters and solo-creators for several reasons: Most of all, you don’t have to worry much about missing something important around you while shooting, since you are capturing a 360 image and can choose the angle in post (reframing/keyframed reframing) if you export as a regular video. This can be extremely useful for scenarios where there’s a lot to see or a lot happening around you – like if you are travel-vlogging from interesting locations or reporting from within a crowd – or just generally if you want to do a piece-to-camera while also showing the viewer what you are looking at in that moment. Insta360’s software stabilization is brilliant and comparable to a gimbal, and the “invisible” selfie-stick makes it look like someone else is filming you. The stick and the compact form of the camera also let you move the camera to places that seem impossible otherwise. With the right technique you can even do fake “drone” shots. Therefore it also makes sense to have the X2 in your tool kit just for special shots, even if you are neither a vlogger nor a journalist, nor interested in “true” 360 video.

A worthy upgrade from the One X / One R?

Should you upgrade if you have a One X or One R? Yes and no. If you are happy with the battery life of the One X or the form factor of the One R and were mainly hoping for improved image quality in terms of resolution / higher frame rates, then no, the One X2 does not do the trick, it’s more of a One X 1.5 in some ways. However, if you are bothered by some “peripheral” issues like poor battery life, very limited functionality of the screen/display, lack of external microphone support (One X) or the slightly clunky and cumbersome form factor / handling (One R) and you are happy with a 5.7k resolution, the X2 is definitely the better camera overall. If you have never owned a 360 (video) camera, this is a great place to start, despite its quirks – just be aware that Insta360’s support can be surprisingly cranky and poor in case you run into any issues.

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter about important things that happened in the world of mobile video.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

#27 No, you don’t need a second video track for storytelling! (… and why it really doesn’t matter that much anymore) — 25. June 2020

#27 No, you don’t need a second video track for storytelling! (… and why it really doesn’t matter that much anymore)

As I pointed out in one of my very first blog posts here (in German), smartphone videography still comes with a whole bunch of limitations (although some of them are slowly but surely going away or have at least been mitigated). Yet one central aspect of the fascinating philosophy behind phoneography (that’s the term I now prefer for referring to content creation with smartphones in general) has always been one of “can do” instead of “can’t do” despite the shortcomings. The spirit of overcoming obvious obstacles, going the extra mile to get something done, trailblazing new forms of storytelling despite not having all the bells and whistles of a whole multi-device or multi-person production environment seems to be a key factor. With this in mind I always found it a bit irritating and slightly “treacherous” to this philosophy when people proclaimed that video editing apps without the ability to have a second video track in the editing timeline are not suitable for storytelling. “YOU HAVE TO HAVE A VIDEO EDITOR WITH AT LEAST TWO VIDEO TRACKS!” Bam! If you are just starting out creating your first videos you might easily be discouraged if you hear such a statement from a seasoned video producer. Now let me just make one thing clear before digging a little deeper: I’m not saying having two (or multiple) video tracks in a video editing app as opposed to just one isn’t useful. It most definitely is. It enables you to do things you can’t or can’t easily do otherwise. However, and I can’t stress this enough, it is by no means a prerequisite for phoneography storytelling – in my very humble opinion, that is. 

I can see why someone would support the idea of having two video tracks as being a must for creating certain types of videography work. For instance it could be based on the traditional concept of a news report or documentary featuring one or more persons talking (most often as part of an interview) and you don’t want to have the person talking occupying the frame all the time but still keep the statement going. This can help in many ways: On a very basic level, it can work as a means for visual variety to reduce the amount of “talking heads” air time. It might also help to cover up some unwanted visual distraction like when another person stops to look at the interviewee or the camera. But it can also exemplify something that the person is talking about, creating a meaningful connection. If you are interviewing the director of a theater piece who talks about the upcoming premiere you could insert a short clip showing the theater building from the outside, a clip of a poster announcing the premiere or a clip of actors playing a scene during the rehearsal while the director is still talking. The way you do it is by adding the so-called “b-roll” clip as a layer to the primary clip in the timeline of the editing app (usually muting the audio of the b-roll or at least reducing the volume). Without a second video track it can be difficult or even impossible to pull off this mix of video from one clip with the audio from another. But let’s stop here for a moment: Is this really the ONLY legitimate way to tell a story? Sure, as I just pointed out, it does have merit and can be a helpful tool – but I strongly believe that it’s also possible to tell a good story without this “trick” – and therefore without the need for a second video track. Here are some ideas:

WYSIWYH Style

Most of us have probably come across the strange acronym WYSIWYG: “What you see is what you get” – a concept from UI design meaning that the preview you get in a (text/website/CMS) editor closely resembles the way things will actually look after creating/publishing. If you mark a word as bold in the editor and it immediately appears bold, that’s WYSIWYG. If you have to punch in code like <b>bold</b> into your text editing interface to make the published end result bold, that’s not WYSIWYG. So I dare to steal this bizarre acronym in a slightly altered version and context: WYSIWYH – “What you see is what you hear” – meaning that your video clips always keep their original sound. So in the case of an interview like the one described before, using a video editing app with only one video track, you would either present the interview in one piece (if it’s not very long) or cut it into smaller chunks with “b-roll” footage in between rather than overlaid (if you don’t want the questions included). Sure, it will look or feel a bit different, not “traditional”, but is that bad? Can’t it still be a good video story? One fairly technical problem we might encounter here is getting smooth audio transitions between clips when the audio levels of the two clips are very different. Video editing apps usually don’t have audio-only cross-fades (WHY is that, I ask!) and a cross-fade involving both audio AND video might not be the preferred transition of choice, as most of the time you want to use a plain cut. There are ways to work around this, however, or you can just accept it as a stylistic choice for this way of storytelling.
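For the curious: the audio-only cross-fade that most mobile editors lack is conceptually very simple. Here’s a minimal sketch in Python/NumPy – my own illustration, not how any particular app implements it – assuming two mono clips as float sample arrays at the same sample rate:

```python
# A minimal audio-only cross-fade: fade out the tail of clip A while
# fading in the head of clip B over a fixed number of overlapping samples.
import numpy as np

def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Linearly blend the last `overlap` samples of `a` into the first of `b`."""
    fade = np.linspace(1.0, 0.0, overlap)          # gain curve for clip A
    mixed = a[-overlap:] * fade + b[:overlap] * (1.0 - fade)
    return np.concatenate([a[:-overlap], mixed, b[overlap:]])

a = np.ones(1000)           # clip 1: loud, constant ambience
b = np.full(1000, 0.2)      # clip 2: much quieter room tone
out = crossfade(a, b, overlap=200)
print(len(out))             # → 1800 samples: 800 + 200 blended + 800
```

Desktop editors do essentially this under the hood (often with an equal-power curve instead of a linear one), which makes the feature’s absence in mobile apps all the more puzzling.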

One-Shot Method

Another very interesting approach, which results in a much easier edit without the need for a second video track (if any at all) but requires more pre-planning before a shoot, is the one-shot method. In contrast to what many one-man-band video journalists do (using a tripod with a static camera), this means you need to be an active camera operator at the same time to catch different visual aspects of the scene. This probably also calls for some sort of stabilization solution like phone-internal OIS/EIS, a rig, a gimbal or at least a steady hand and some practice. Journalist Kai Rüsberg has been an advocate of this style and collected some good tips here (the blog post is in German but Google Translate should help you get the gist). As a matter of fact, there’s even a small selection of notable feature films created in such a (risky) manner, among them “Russian Ark” (2002) and “Victoria” (2015). One other thing to take into consideration: if there’s any kind of question-asking involved, the interviewer’s voice will be “on air”, so that audio needs to be good enough as well. I personally think that this style can be (if done right!) quite fascinating and more visually immersive than an edited package of static separate shots, but it poses some challenges and might not be suited for everybody or every job/situation. Still, trying something like this might just expand your storytelling capabilities. A one-track video editing app will suffice to add some text, titles, narration, fade in/out etc.

Shediting

A unique amalgam of the traditional multi-clip approach and the one-shot method is a technique I called “shediting” in an earlier blog post. It relies on a feature that is present in many native and some third-party camera apps: By pausing the recording instead of stopping it in between shots, you can cram a whole bunch of different shots into a single clip. Just like the one-shot method, this can save you lots of time in the edit (sometimes things need to go really fast!) but requires more elaborate planning and comes with a certain risk. It also usually means that everything needs to be filmed within a very compact time frame and in one location/area, because in most cases you can’t close the app or let the phone go to sleep without actually stopping the recording. Nonetheless, I find this to be an extremely underrated and widely unknown “hack” for piecing together a package on the go! Do yourself a favor and try to tell a short video story that way!

Voice-Over

A way to tackle rough audio transitions (or bad/challenging sound in general) while also creating a sense of continuity between clips is to use a voice-over narration in post production. Most mobile editors offer this option directly within the app, and even if you come across one that doesn’t (or, like Videoshop, hides it behind a paywall), you can easily record a voice-over in a separate audio recording app and import the audio into your video editor – although it’s a bit more of a hassle if you need to redo it when the timing isn’t quite right. One example could be splicing your interview into several clips in the timeline and adding “b-roll” footage with a voice-over in between. Of course you should see to it that the voice-over is meaningful and doesn’t just repeat information or give away the gist / key argument of an upcoming statement by the interviewee. You could, however, build or rephrase an actual question into the voice-over: Instead of having the original question “What challenges did you experience during the rehearsal process?” in the footage, you record a voice-over saying “During the rehearsal process director XY faced several challenges both on and off the stage…” for the insert clip, followed by the director’s answer to the question. It might also help in such a situation to let the voice-over begin at the end of the previous clip and flow into the subsequent one to cover up an obvious change in the ambient sound between clips. Of course, depending on the footage, the story and situation, this might not always work perfectly.

Text/Titles

Finally, with more and more media content being consumed muted on smartphones “on the go” in public, one can also think about using text and titles as an important narrative tool, particularly if there’s no interview involved (of course a subtitled interview would also be just fine!). This only works, however, if your editing app has an adequate title tool – nothing too fancy, but at least covering the basics like control over fonts, size, position, color etc. (looking at you, iMovie for iOS!). Unlike a second video track, titles don’t tax the processor very much, so even ultra-budget phones will be able to handle them.

Now, do you still remember the second part of this article’s title, the one in parentheses? I have just gone to great lengths to explain why I think it’s not always necessary to use a video editing app with at least two video tracks to create a video story with your phone, so why would I now say that it doesn’t really matter that much anymore? Well, if you look back a whole bunch of years (say around 2013/2014), when the phoneography movement really started to gather momentum, the idea of having two video tracks in a video editing app was not only a theoretical question for app developers pondering how advanced they WANTED their app to be. It was also very much a plain technical consideration, particularly on Android, where the processing power of devices ranged from quite weak to quite powerful. Processing multiple video streams in HD resolution simultaneously was no small feat at the time for a mobile processor – to a small degree this might even still be true today. This meant that not only was there a (very) limited selection of video editing apps able to handle more than one video track at a time, but even when an app like KineMaster or PowerDirector generally supported multiple video tracks, the feature was only available on certain devices, excluding phones and tablets with very basic processors that weren’t up to the task. This has very much changed over the last years with SoCs (System-on-a-Chip) becoming more and more powerful, at least when it comes to handling video footage in FHD 1080p resolution as opposed to UHD/4K! Sure, I bet there’s still a handful of (old) budget Android devices out there that can’t handle two tracks of HD video in an editing app, but for the most part, the ability to use at least two video tracks is no longer tied to technical constraints – if app developers want their app to have multi-track editing, they should be able to integrate it. And you can definitely see that an increasing number of video editing apps have added this feature – one that’s really good, cross-platform and free without watermark is VN, which I wrote about in an earlier article.

So, despite having argued that two video tracks in an editing app is not an absolute prerequisite for producing a good video story on your phone, the fact that nowadays many apps and basically all devices support this feature very much reduces the potential conflict that could arise from such an opinion. I do hope however that the mindset of the phoneography movement continues to be one of “can do” instead of “can’t do”, exploring new ways of storytelling, not just producing traditional formats with new “non-traditional” devices.

As usual, feel free to drop a comment or get in touch on the Twitter @smartfilming. If you like this blog, consider signing up for my Telegram channel t.me/smartfilming to get notified about new blog posts and receive the monthly Ten Takeaways Telegram newsletter including a personal selection of 10 interesting things that happened in the world of mobile video during the last four weeks.


#24 Why Telegram is the best messenger app for sending video files — 7. June 2020

#24 Why Telegram is the best messenger app for sending video files

Ever since smartphones and mobile internet became a thing, messenger apps have grown immensely in popularity and have significantly curbed other types of (digital) communication like SMS/texts, emails and heck yes, phone calls, for most of us. There’s also little doubt about which messenger apps can usually be found on everyone’s phone: WhatsApp is by far the most popular app of its kind on a global scale, with only Facebook Messenger being somewhat close in terms of users. Sure, if you look at certain regions/countries or age groups you will find other prominent messenger apps like WeChat in China, KakaoTalk in Korea, Viber in Ukraine or Snapchat among the younger generation(s). We have also seen a noticeable rise in the popularity of security- and data-conscious alternatives like the Edward Snowden-recommended Signal or Switzerland-based Threema. One might say that right in between mass popularity and special focus groups sits Telegram.

Telegram started out in 2013, founded by the Russian brothers Nikolai and Pavel Durov, who had already created “Russia’s Facebook”, VK. While it has managed to avoid being seen as “the Kremlin messenger”, its claims of providing an experience that is very strong in terms of security and data protection have received some flak from experts. It has also come under a questionable spotlight as the preferred modus communicandi of the so-called “Islamic State” and other extremist groups that want to avoid scrutiny from intelligence agencies. But this is just some general context and everyone can decide for themselves what to make of it.

The reason for this article has nothing to do with the aforementioned “historical” context but looks only at the app’s potentially useful functionality when it comes to media production, particularly video production. People are sending enormous amounts of video these days via their messenger apps. For reasons benefitting the sender/receiver as well as the app service provider itself, those videos are usually compressed, both in terms of resolution and bitrate. The compression results in smaller file sizes which lets you send/receive them faster, use up less storage space and avoid burning through too much mobile data. This works pretty well when all you do is watch the video in your messenger app, it’s far from ideal however if you want to work with the video somebody sent you.

While there is a way to prevent the app from automatically compressing your video by sending/attaching it not as a video (the usual way of doing it) but as a file (as you would normally add a doc or PDF), the file size limit of most messenger apps is so small that it’s not really suitable for sending video files longer than one minute. WhatsApp has a current file size limit of 100 MB and so does Signal. Threema tops out at 50 MB for sending files uncompressed while Facebook Messenger gives you a measly 25 MB! Just for perspective: at a moderate bitrate of 16 Mbit/s, a FHD 1920×1080 video reaches the 100 MB limit after only 50 seconds. In this regard, Telegram is basically light years ahead of the competition as it lets you send uncompressed files up to 2 GB (around 2000 MB) – yes, you heard that right!
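To put those limits into perspective, here is a small back-of-the-envelope calculation in Python. The limits are the ones quoted above and may of course change over time:

```python
# Rough estimate of how long a clip can be before it hits a messenger's
# limit for uncompressed files. Limits (in MB) as quoted in this article;
# they may change over time.
LIMITS_MB = {
    "Telegram": 2000,
    "WhatsApp": 100,
    "Signal": 100,
    "Threema": 50,
    "Facebook Messenger": 25,
}

def max_seconds(limit_mb: float, bitrate_mbit_s: float) -> float:
    """Seconds of footage that fit into limit_mb at the given bitrate
    (1 MB = 8 Mbit)."""
    return limit_mb * 8 / bitrate_mbit_s

for app, limit in LIMITS_MB.items():
    print(f"{app}: about {max_seconds(limit, 16):.1f} s at 16 Mbit/s")
```

At 16 Mbit/s, WhatsApp’s and Signal’s 100 MB cap out at 50 seconds, while Telegram’s 2 GB buys you roughly a thousand seconds of footage.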

Choose “File” and then “Gallery” (or another option if that’s where your media is located) to send a video in full quality without compression.

To send an uncompressed video file within Telegram, tap on the paper clip icon in a chat, select “File” (NOT “Gallery”) and then “Gallery” (or choose one of the other options if your video file is located somewhere else on the device). It’s that easy! There’s also a cool way to use Telegram as your personal unlimited cloud storage: if you open the app’s menu (tapping on the three lines in the top left corner) you will find an option that says “Saved Messages”. This is basically your own personal space within the app where you can collect all kinds of material like notes, links or files. As long as the file doesn’t exceed 2 GB, you can upload it into this “self chat” like you would with a regular cloud storage service like Dropbox, Google Drive or OneDrive. And believe it or not, you currently get UNLIMITED storage for free! I think there’s a chance that Telegram might cap this at some point in the future if people start using it too excessively, but until then, this is a pretty amazing feature most users don’t know about (even I didn’t until a few days ago!).

Telegram gives you unlimited cloud storage; each file you upload can be up to 2 GB in size.

This benefit gets even more powerful when you consider that you can use Telegram across several devices with the same account (it’s not only available for Android, iOS and Windows 10 Mobile but also has desktop apps for Windows and macOS!), something you can’t do with other messengers like WhatsApp, which ties active use of one account to a single mobile device. A side note though: if you have someone send you a big uncompressed video file over mobile data, you might want to tell the other person that it will cut into their mobile data significantly. So if possible, they should send it when logged into a WiFi network.

In-app video editor of Telegram.

And even if your goal is actually to compress a video when sending it, Telegram gives you the most control over how. When selecting a video via the Gallery button (instead of the File button) you can adjust the resolution of the clip using the app’s recently updated in-app video editor. After marking your clip of choice by tapping on the empty circle in the top right corner of the video’s thumbnail, tap on the thumbnail itself to open the video editor. You will be able to trim the clip or add a drawing/text/sticker (brush icon). You can even do some basic color correction (sliders icon), I kid you not! And you can adjust the video resolution by tapping on the gear icon in the bottom left corner of the tool box. By moving the slider you can choose between FHD 1920×1080, HD 1280×720, SD 854×480 and what I will call “LD” (low definition) 480×270.
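Incidentally, all four export options keep the same 16:9 aspect ratio; the odd-looking 854 for 480p simply comes from rounding 480 × 16/9 (853.33) up to an even pixel count. A quick sketch of that arithmetic (the function name is mine, not anything from Telegram):

```python
def width_for_height(height: int, aspect=(16, 9)) -> int:
    """Width of a 16:9 frame for the given height, rounded up
    to an even pixel count (hence 854 rather than 853.33 for 480p)."""
    exact = height * aspect[0] / aspect[1]
    return int(-(-exact // 2) * 2)  # ceil to the nearest even integer

for h in (1080, 720, 480, 270):
    print(f"{width_for_height(h)}x{h}")  # 1920x1080, 1280x720, 854x480, 480x270
```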

If your primary focus when using messenger apps is the most comprehensive security/data protection, or the widest compatibility, and you don’t need the app as a tool for direct (video) file transfer, then you might still prefer Signal, Threema or WhatsApp respectively. Otherwise Telegram is a powerful tool with best-in-class features for a professional video production workflow.

So despite the fact that Telegram is still far from being as ubiquitous as WhatsApp or Facebook Messenger, it has significantly grown its user base in recent years (currently over half a billion installs from the Google Play Store!) and chances are improving that the person sending you video is using it or has at least installed it on her/his phone.

Questions and comments are welcome, either below in the comment section or on Twitter @smartfilming. I also just created my own Telegram channel which you can join here: https://t.me/smartfilming. You will be notified about new blog posts and receive the monthly Ten Takeaways Telegram newsletter including a personal selection of 10 interesting things that happened in the world of mobile video during the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

Download Telegram for Android on the Google Play Store.

Download Telegram for iOS on the Apple AppStore.

Download Telegram for Windows 10 Mobile / WindowsPhone on the Microsoft Store.

#14 “Shediting” or: How to edit video already while shooting on a smartphone — 17. May 2018

#14 “Shediting” or: How to edit video already while shooting on a smartphone

UI of Motorola’s native camera app (“MotoCam”) while recording video. Bottom right is the “pause” button that will let you pause the recording and resume it later if you don’t leave the app.

When using a headline like the one above, camera people usually refer to the idea that you should already think about the editing when shooting. This basically means two things: a) make sure you get a variety of different shots (wide shot, close-up, medium, special angle etc.) that will allow you to tell a visually interesting story, but b) don’t overshoot – don’t take 20 different takes of a shot or record a gazillion hours of footage because it will cost you valuable time to sift through all that footage afterwards. That’s all good advice, but in this article I’m actually talking about something different: a way to create a video story with different shots while only using the camera app – no editing software! In a way, this is rather trivial, but I’m always surprised how many people don’t know about it, as it can be extremely helpful when things need to go super-fast. And let’s be honest, from mobile journalists to social media content producers, there’s an increasing number of jobs and situations to which this applies…

The feature that makes it possible to already edit a video package within the camera app itself while shooting is the ability to pause and resume a recording. The most common way to record a video clip is to hit the record button and then stop the recording once you’re finished. After stopping the recording the app will quickly create/save the video clip to be available in the gallery / camera roll. Now you might not have noticed this but many native camera apps do not only have a „stop“ button while recording video but also one that will temporarily pause the recording without already creating/saving the clip. Instead, you can resume recording another shot into the very same clip you started before, basically creating an edit-on-the-go while shooting with no need to mess around with an editing app afterwards. So for instance, if you’re shooting the exterior of an interesting building, you can take a wide shot from the outside, then pause the recording, go closer, resume recording with a shot of the door, pause again and then go into the building to resume recording with a shot of the interior. When you finally decide to press the „stop“ button, the clip that is saved will already have three different shots in it. The term I would propose for this is „shediting“, obviously a portmanteau of „shooting“ and „editing“. But that’s just some spontaneous thought of mine – you can call this what you want of course.
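Conceptually, a pause/resume recorder behaves like a tiny state machine that keeps appending shots to one unsaved clip until you press stop. This toy sketch (all names are mine, not any real camera API) models the building example above:

```python
class ShedRecorder:
    """Toy model of 'shediting': pause keeps the clip open, only stop
    finalizes it, so several shots end up in one saved file."""

    def __init__(self):
        self.shots = []
        self.recording = False

    def record(self, shot: str):
        """Start or resume recording a new shot into the same clip."""
        self.recording = True
        self.shots.append(shot)

    def pause(self):
        """Pause without saving – the clip stays open for more shots."""
        self.recording = False

    def stop(self) -> str:
        """Finalize: the saved clip contains all recorded shots in order."""
        self.recording = False
        clip = " + ".join(self.shots)
        self.shots = []
        return clip

rec = ShedRecorder()
rec.record("wide shot of building")
rec.pause()
rec.record("close-up of door")
rec.pause()
rec.record("interior")
print(rec.stop())  # wide shot of building + close-up of door + interior
```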

What camera apps will let you do shediting? On Android, actually most of the native camera apps I have encountered so far. This includes phones from Samsung, LG, Sony, Motorola/Lenovo, Huawei/Honor, HTC, Xiaomi, BQ, Wileyfox and Wiko. The only two Android phone brands that didn’t have this feature in the phone’s native camera app were Nokia (as tested on the Nokia 5) and Nextbit with its Robin. As for 3rd party video recording apps on Android, things are not looking quite as positive. While Open Camera and Footej Camera do allow shediting, most others don’t have this feature. FilmicPro (Android & iOS) meanwhile doesn’t have a “pause” button but you can basically achieve the same thing by activating a feature called “Stitch Recorded Footage” in the settings under “Device”.  There’s also MoviePro on iOS which lets you do this trick. Apple however still doesn’t have this feature in the iOS native camera app at this point. And while almost extinct, Lumia phones with Windows 10 Mobile / Windows Phone on the other hand do have this feature in the native camera app just like most Android phones.

EDIT: After I had published this article I was asked on Twitter if the native camera app re-adjusts or lets you re-adjust focus and exposure after pausing the recording because that would indeed be crucial for its actual usefulness. I did test this with some native camera apps and they all re-adjusted / let you re-adjust focus and exposure in between takes. If you have a different experience, please let me know in the comments!

Sure, shediting is only useful for certain projects and situations because once you leave the camera app, the clip will be saved anyway without possibility to resume and you can’t edit shots within the clip without heading over to an editing app after all. Still, I think it’s an interesting tool in a smartphone videographer’s kit that one should know about because it can make things easier and faster. If you have any questions or comments, do drop them below or find me on Twitter @smartfilming.

For an overview of all my blog posts click here.

#3 Media production with smartphone & tablet: CONTRA — 13. January 2017

#3 Media production with smartphone & tablet: CONTRA


CONTRA – What disadvantages and problems can there be?

1. No optical zoom
Smartphones and tablets can do a lot, but not everything. One central weakness compared to regular (video) cameras is the lack of an optical zoom, which brings distant subjects closer and allows for quick and convenient changes of framing. Camera apps sometimes offer a digital zoom, but it should normally be avoided because the image is only enlarged electronically and image quality suffers. Samsung did bring two very interesting smartphones with 10x optical zoom to market, the Galaxy S4 Zoom and the K Zoom (both by now running quite outdated Android software), and Asus ventured an experiment with the ZenFone Zoom (3x). The rest of the market, however, has to do without (for now). The reason is quite simple: at the current state of technology, an optical zoom would make the smartphone considerably thicker than is currently en vogue. For certain tasks, though, a zoom is indispensable. When recording theater plays, sports matches and other events, it is often not possible or not desirable to walk onto the stage or the field for a close-up. And even when you can easily move closer to a subject, a zoom is generally the more convenient and faster solution, which can be a great advantage for journalists, for whom speed is often essential. Because of the missing optical zoom you sometimes also have to get uncomfortably close to people for a close-up, which can be unpleasant for them. It is possible to mount small telephoto lenses specially developed for smartphones in front of the phone’s lens with a bracket, but this procedure is not always very practical and doesn’t work (equally well) with all smartphones.
An interesting new approach, however, uses two camera lenses on the back of the smartphone, each with a different focal length, serving as a kind of optical zoom, albeit within very narrow limits: Apple’s new iPhone 7 Plus goes this route, as does Asus’ upcoming ZenFone 3 Zoom. This keeps the phone’s form factor flat, but because of the two fixed focal lengths (28mm and 56mm on the iPhone 7 Plus, i.e. a 2x zoom) there are no intermediate values (no continuous zooming), and a zoom factor of 2 is not exactly earth-shattering.
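Two quick calculations illustrate the point: the iPhone 7 Plus pairs a 28mm and a 56mm equivalent lens, which is exactly a 2x optical zoom, while a digital zoom merely crops into the frame and upscales, discarding pixels. A small sketch (function names are mine, purely for illustration):

```python
def zoom_factor(wide_mm: float, tele_mm: float) -> float:
    """Optical zoom factor between two fixed focal lengths."""
    return tele_mm / wide_mm

def digital_zoom_capture(width: int, height: int, zoom: float) -> tuple:
    """Pixels actually sampled after a digital (crop) zoom,
    before the image is upscaled back to full frame."""
    return int(width / zoom), int(height / zoom)

print(zoom_factor(28, 56))                  # 2.0 – the iPhone 7 Plus dual-lens setup
print(digital_zoom_capture(1920, 1080, 2))  # (960, 540) – 2x digital zoom on FHD
```

This is also why the article advises against digital zoom: a 2x digital zoom on a Full HD frame only samples a quarter of the pixels.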

2. Light sensitivity of the sensor
Another weak spot of smartphone cameras is the sensor’s light sensitivity in poor lighting conditions. While smartphone cameras can in many cases compete on equal footing with dedicated cameras in good light, the mostly rather small sensor quickly produces unsightly image noise in unfavorable lighting (especially when the camera app runs in auto mode). Of course there is steady progress here as well, and one upside of the missing optical zoom mentioned above is that you don’t lose light by zooming: zoom optics typically lose f-stops, and thus light, towards the telephoto end.

3. Ergonomics
Far less weighty than the first two points, but still a disadvantage in some situations, is the ergonomics of a smartphone as a camera. While video cameras and (video-capable) still cameras are designed so that you can hold and operate them securely, that is not really the case with smartphones due to the (quite understandable) trend towards flat, compact designs and the device’s primary purpose. There are now special smartphone rigs and mounts that give the phone more grip, and this weakness mainly shows when filming handheld, less so when using a tripod. Two further handling points are worth discussing: a smartphone’s touchscreen is undoubtedly ingenious in its display versatility, but physical buttons like those on dedicated cameras have real advantages, as they can be operated “blind” and with cold fingers or gloves. You also mistype more easily on a touchscreen than on real buttons, which can be quite annoying in the heat of the moment. Moreover, only a few smartphone models and apps make use of the remaining physical buttons when recording video. Finally, in certain situations it is a hindrance that the screen cannot be moved independently of the camera; this proves to be a shortcoming when framing shots close to the ground or above your head.

4. No viewfinder
The internal viewfinder (electronic viewfinder, EVF) of a video or photo camera is helpful when filming in bright sunlight, when the external display isn’t bright enough to judge exposure reliably. Smartphones (for understandable reasons) don’t have a viewfinder shielded from ambient light. Depending on the situation you can help yourself by setting screen brightness to maximum (which of course drains the battery), buying or building a sun hood as an accessory, or trusting the phone’s/app’s auto exposure.

5. System stability
Of course, system stability in media production with smartphones depends on numerous different factors: smartphone model, operating system, OS version, app, app version etc. In general, however, the versatility of the smartphone as phone, mini computer, video and photo camera, music player etc. has made the device and its functionality extremely complex, even though the user usually doesn’t notice because the interface is almost always simple and intuitive. The more complex a system, the greater the risk of complications. While a video camera (which is of course a very complex piece of technology in its own way) has one main purpose and was designed and tested for it, that’s not the case with a smartphone. With a smartphone it is more likely that the operating system or an app has a bug that was simply overlooked amid the abundance of functions and the complexity of the whole system. I would therefore claim that a video camera is generally more reliable when it comes to recording video. Of course video cameras can malfunction too, and among smartphones certain devices/models have proven more reliable than others. Still, this point should not be ignored.

CONCLUSION
The image of the smartphone as a Swiss Army knife for AV media production hits the nail on the head: its versatility, compactness and intuitive usability open up undreamt-of possibilities, even for less tech-savvy people. It is simply fantastic what high-quality AV content can be produced today with a single device that almost everyone has at hand at all times. One should be aware, however, that precisely because of its versatility the smartphone also has certain weaknesses, since it wasn’t designed and “perfected” for one single purpose. This shouldn’t keep anyone from experimenting in this field. Those who know the weaknesses can often navigate around them and play the numerous strengths to their advantage. In any case, the rapid and highly creative developments in this area will be worth following.

#2 Media production with smartphone & tablet: PRO — 29. July 2015

#2 Media production with smartphone & tablet: PRO


Technology, as we know, lends itself particularly well to a slightly irrational fascination, marveling at things simply for their incredible possibilities and functions – the question of a new gadget’s actual usefulness and practicality often blurs in sparkling, tear-filled eyes of joy. In art there is the phrase l’art pour l’art – art for art’s sake, without an actual practical purpose. But the smartphone, as we want to classify it here, is not art. What you create with it may be art, but not the device itself. The device is rather the tool for creating something (art being one possibility), and a tool – this will be easier to agree on than questions of art – should somehow serve a particular purpose. This raises the question of what added value a new tool offers over older, existing ones. And this is where things get concrete: What advantages does media production with smartphones and tablets offer compared to more traditional equipment like dedicated recording and editing devices (video cameras, still cameras, audio recorders, editing PCs etc.)? And – this should also be mentioned – where are the weak spots? A helpful maxim in this field is that ideally you always have the right tool for the job at hand – and that can vary from case to case. The smartphone is in many respects a kind of “digital Swiss Army knife” combining countless useful functions – but whether a pocket knife is the best tool for felling the biggest, mightiest oak in the forest is rather doubtful. Monty Python, though, would probably even manage it with a herring. A pro & con in two parts.


PRO – What speaks for media production with smartphones and tablets?

1. Mobility
The most obvious advantage of a smartphone is surely its small size and ubiquitous availability. It is small and most people carry it with them at all times. There’s no camera bag to lug around. Especially with spontaneous ideas or unforeseen events and situations you can react quickly without first having to get equipment from somewhere else. In this sense the smartphone is of course a natural fit for journalists and reporters.

2. Simplified & faster workflow
While the mobility factor is certainly important, I find another point even more fascinating: for the first time in history there is a single multimedia device that unites all stages of a complete production process: planning – shooting/recording – file transfer – editing – publishing. If you like, you can extend the list by “reception”, since media content is increasingly consumed on smartphones. “Planning” is certainly the broadest term here, so I won’t go into detail, but various organizational apps (e.g. Evernote) or simply the ability to research information via the internet (weather, locations, people, events etc.) can help. Thanks to the built-in camera and microphone, video/photo/audio recordings are no problem with a smartphone (quality depends on the device, of course). Unlike the classic workflow with camera and computer, there is no file transfer (anyone who has worked with tapes knows how nerve-racking and time-consuming that can be), and numerous apps let you edit the files (video, photo, audio) and put together a story. Since the smartphone is connected to the (mobile) internet (i.e. it is a connected device / connected camera), finished pieces can be sent off or published on the relevant platform right away. With the introduction of 3G and the current transition to 4G/LTE, there is also a sufficiently fast mobile network infrastructure to send AV media files without problems. The streamlined workflow saves time and resources, which can be invaluable, particularly in news.

3. Multifunctionality
Various apps and functions turn the smartphone into a kind of “digital Swiss Army knife”. Beyond the typical workflow of an AV story described under point 2, the device can often be used for entirely different purposes: as an additional light source with the flashlight function, as a teleprompter with a prompter app, as a separate audio recorder with a lavalier mic in a person’s pocket for better sound, as a monitor for an action camera, for livestreaming etc. etc.

4. Connected device
A central advantage over “normal” cameras and other recording devices is, as mentioned under point 2, the connection to the internet. Not only can files be sent quickly on the go, you also have direct access to social networks (usually via apps developed specifically for mobile use), which increasingly serve as publishing platforms for media content.

5. Sufficient quality
By now virtually every smartphone camera can record video in HD (720p) or Full HD (1080p), and some models even manage 4K. Photos and audio recordings can also be made in a quality that is “good enough” for many purposes. “Good enough” means the quality is sufficient, even though other devices could (technically) achieve better results – the gain in quality, however, is barely noticed or at least not considered essential.

6. Discretion
Not only can the small, inconspicuous form and general familiarity with the device feel comfortable for you, an interview partner may also feel more at ease when not facing a big camera. You might well get better and more interesting footage this way.

7. Expandability through accessories
As will be discussed under CONTRA, multifunctionality in general comes with limited functionality in specifics. In many areas, however, accessory manufacturers have closed or at least narrowed some gaps. Perhaps the most obvious and most-bought helper is a mount that attaches the smartphone to a standard tripod for shake-free footage, or to put yourself in front of the camera when no separate camera operator is at hand. Beyond that there are special external microphones and adapters, lenses, tripods, mounts and much more.

8. Special perspectives
Finally, not to be underestimated is the prospect of very special camera angles resulting from the device’s compact form. A smartphone can often be placed or mounted in spots that remain inaccessible to other cameras because of their size or weight.

Coming soon: #3 Media production with smartphone & tablet: CONTRA