smartfilming

Exploring the possibilities of video production with smartphones

#50 Anniversary Post: 50 Voices of Phoneography — 15 November 2021


The good thing about numbering your blog posts is that it’s easy to figure out when you have an anniversary coming up… 😉 And now’s the time! To be honest, I wasn’t really sure I would get this far when I started smartfilming.blog in the summer of 2015 with my first articles in German. I had made my initial steps in the blogosphere in 2009 writing about something completely different, but discontinued that project two years later when I realized my interest in the topic was fading. Well, I have shown more stamina this time around: 6 years and 50 blog posts! (In case you want to check out an overview of the previous 49, click here.)

So what to do for this happy occasion? I’m quite reluctant about the idea of patting myself on the back. Instead, I thought it would be more insightful to point the spotlight away from myself and towards the community at large and some of its preeminent members, many of whom I am in regular contact with and really enjoy talking to and, yes, sometimes arguing with. I decided to pick 50 creators, thinkers and educators and ask them three questions on the topic of content production with smartphones / mobile devices, something I personally like to call phoneography because it includes different aspects like video, audio, photo, graphics, text etc. – all created with a phone. I know this was quite an undertaking, and it definitely resulted in the most expansive blog post ever around here, but hey, I think it’s really worth checking out: there’s just so much wisdom, insight and also a very personal story to find in every single contribution!

As I have met almost all of the contributors through Twitter and much of the conversation around this topic is happening there, I have linked their names to their Twitter accounts; from there you can easily click through to everyone’s personal and/or professional websites and/or other social media channels. I would sincerely like to thank all of them.

So here are the three questions that I asked everyone:

  1. When and how/why did you discover/choose a smartphone as a media production tool?
  2. What’s currently the biggest limitation and/or what new ability are you looking forward to the most?
  3. Where do you see this movement going within the next 5 to 10 years?

——————————————————————————————

Glen Mulcahy – International Trainer/Speaker and Managing Director, Mojofest Ltd.

  1. My first phone that I used as a camera was a Nokia N93i. I commissioned a report for our newsroom to see if it was possible and to assess the quality. It was pretty awful. The N93i had a 2mpx cam and used some weird mobile/WAP video codec that Avid took AGES to transcode. The next attempt was with the iPhone 4 when I was teaching video journalism on a European training course in Budapest. That report was shot in Filmic Pro, edited in iMovie and sent via FTP to RTÉ for assessment, and in many ways was the beginning of the RTÉ mojo project.
  2. I have yet to try “cinematic” mode in the new iPhone 13 Pro Max and am certainly curious/excited about the aesthetic. I think ProRes is a huge step for professional content, particularly for broadcast applications. If we can address low light performance and zoom, using either periscope lens tech or hardwired camera accessories, I think the Rubicon will be crossed.
  3. Truthfully I’m stunned that newsrooms across a host of different media at this stage have not fully embraced mobile as a holistic content production platform, i.e. not just for making the content but also for distributing it. For me the ultimate opportunity with mobile is using the low price point to put more content creators in the field (focus on the storytellers, not the gear), to be able to reach into more communities and serve a broader audience, and to help restore local news after the way it’s been decimated over the last two decades.

Leonor Suárez – Videojournalist and Mobile Journalism University Lecturer

  1. I started using a smartphone to produce stories because it’s a tool that’s always in my pocket. So I can tell the stories I come across whenever/wherever. Also, I can easily do the whole production/journalistic work and send it to my tv station to be aired immediately. It gives me complete freedom to do my work as a journalist and storyteller.
  2. The zoom can be a challenge when producing some kinds of stories. Also, I’m not always happy with the outcome when using external ND filters.
  3. I don’t know what the tool will look like. The only thing I’m sure of is that mobile journalism won’t go back. More and more, journalists need to produce and share videos to tell stories, and they need portable, easy-to-use gear to do it as fast as possible, regardless of where, when and what the circumstances are.

Juan Carlos Bagnell – Producer/Reviewer

  1. I’d dabbled a lot before it, but the LG V20 sealed the deal for me on a flight from LA to NY. I had to get some articles written, and gave it a shot with the V20 and a Bluetooth keyboard. It was SO much easier than lugging out a laptop. I could still watch a movie while I typed, and I had room on my tray for a beverage. Since then, I’ve actively been trying to do more from my phones. It’s refreshing to bring a handful of phone accessories to cover an event instead of THOUSANDS in studio gear to use on location.
  2. Software, software, software! We’ve been grossly over-buying compute power for years. We need developers to take this pocket power more seriously. There’s so much more we COULD be doing.
  3. I HOPE we see more market disruption and modular thinking. A slate with radios and processors that can interact with other professional equipment like a Sony Xperia Pro, or replace consumer compute needs like Samsung DeX. Professional SHOULD mean being more adaptable to the actual needs of pros. However, our futures are somewhat entwined with companies that will ultimately decide whether they will disrupt their own existing product and profit lines. I fear it’s just as likely that mobility could take a more significant turn towards “average consumer” if folks don’t vote for features with their wallets. Samsung’s folding strategy is a perfect example. Why keep investing in DeX if significantly more people buy the Z Flip instead of the Z Fold? Why sell a consumer ONE incredibly flexible product when you can get them to buy multiple compute devices with artificial limits on what those products can do?

Courtney G. Jones – Producer/Director/Head of Development, Macaroni Art Productions

  1. I first decided to use a smartphone on a feature film called ‘Wood Witch: The Awakening’ in 2015 after hearing about ‘Searching for Sugar Man’ (2012) and ‘Tangerine’ (2015).
  2. I really look forward to better low-light performance in smartphones, whether by using bigger sensors or computational video or both.
  3. In the next 5+ years, I see mobile phones usurping DSLRs and mirrorless cameras. It might even happen sooner!

Bianca-Maria Rathay – Mobile Videographer, Mobile Journalist and Trainer

  1. It all started with a mobile reporting workshop with Matthias Sdun in 2014. It was so much fun that I thought afterwards this could be my niche in journalism. I bought some gear online (fun fact: it took me one and a half hours to explain to the people at customs in Hamburg what it was until they finally cleared it – no, the metal case wasn’t a weapon) and practiced. Also, I told lots of people about it, and when I got my first clients I figured this would work. Eventually the business moved more towards PR and companies.
  2. Biggest limitation: zooming (at least with most smartphones) and decent shallow depth of field. Having said that, the biggest new ability to look forward to is computational shallow depth of field (albeit with limitations).
  3. The mojo movement is one that I really treasure. Smartphone filming and mobile journalism will continue to get better, also technology-wise. Nevertheless, I think the future leads away from the device and towards the skills: knowing how to tell compelling stories for each platform and how to adapt the technology, no matter what camera you’ve got available. The smartphone will probably be replaced by wearables in the future, but “story trumps device” has been the case all along.

Mark Robertson – Mobile and Video Journalism Trainer for the BBC Academy

  1. I first used a smartphone for video in 2009 to gather audio for radio news, but as a byproduct I had to record video to be able to get the audio. This was on a Nokia N95. You can see the video here.
  2. The biggest limitation is probably battery and memory, though both of those are easily overcome with planning and care. Outside of that, it’s that the smartphone is still not seen as a “proper camera”, despite its technology being much in advance of many more traditional-style cameras – even including, in some cases, the ability to shoot very high quality video formats.
  3. The future is becoming more mainstream as people realise the advantages of mobile filming and the democratisation it brings – you don’t need the latest, greatest bit of kit from the big camera makers, the device in your pocket does just as well; and the price of those devices just keeps dropping, allowing more and more people access to video storytelling tools.

Simon Horrocks – Writer/Filmmaker/YouTuber

  1. When Andrea Holle told me about her idea for a film festival showing only films shot on smartphones.
  2. Shallow depth of field.
  3. Not sure if it’s really a movement. Rather a change brought about by new technology. Like when people started driving cars instead of a horse and cart. Cars are a lot more convenient.

Eleanor Mannion – Digital Native Video Journalist with RTÉ News

  1. Glen Mulcahy ran a 5-day mobile journalism course at RTÉ in 2014. By the last day of the course I had filmed and edited my first ever mojo report and have never looked back since!
  2. I think one of the best advantages of mobile journalism is the apps which enhance your filming/production/workflow. Apps such as Filmic Pro, LumaFusion and Mojo are continually improving and widening their functionality. The iPhone telephoto lens was my favourite new ability in recent years and I always look forward to any lens improvements. Any improvements when filming in limited light are always welcome!
  3. In the next five years, my team and I will continue to adapt and innovate with our digital-first platforms and our audience in terms of how they are consuming their news, and where they are consuming their news. We constantly revisit and reassess our basic purpose – made on mobile for mobile.

Blake Calhoun – Filmmaker

  1. I’ve always been interested in using readily available & affordable filmmaking tools and pushing them beyond what they’re really meant to do. That started with shooting 16mm in the late 90s to embracing Mini DV in the early 2000s, and then onto HDV cameras and later DSLRs. Then in late 2011 the iPhone 4s was released and that was the first time I thought that smartphone video was getting pretty good and was actually going to be a thing – and that’s the same year I started my YouTube channel on that topic. 
  2. The biggest limitation for me using an iPhone right now is a very technical one and that’s – dynamic tone mapping. Basically the phone will change the exposure settings even when things are locked – and that’s using the native camera app or a third-party app like Filmic Pro. It’s extremely frustrating as it limits what you can do, especially shooting professional video. I do think Apple will eventually fix this, but right now it’s definitely an issue and keeps me from using my phone on more projects. I’m really looking forward to seeing if Apple will add USB-C or Thunderbolt to the next iPhone Pro series phones. They’ve now added the ability to shoot ProRes video and the files can be huge, so it really only makes sense to include this for file transfers (right now it’s very slow using Lightning or AirDrop). The other thing this would likely do is allow apps like Filmic Pro to send out a high-quality uncompressed 4K video signal so you could then record that to an Atomos recorder, etc. and not have to worry about the phone’s internal storage.
  3. I’ve said for a while that I think smartphones are on a similar path as DSLRs and mirrorless cameras. Those traditional cameras have been evolving over the past 14 or so years, and smartphone video has really only been what I’d consider prosumer quality since about 2016 – and only REALLY good since 2020 with the introduction of 10-bit video and HDR in the iPhone 12 Pro series devices (Android too, but I don’t know their history as well). So I think in the next 5 or 10 years, which is a very long time in the tech world of course, we could see smartphones replace many (prosumer) video cameras just like they have with all the point-and-shoot stills cameras. The quality in many cases is already there now. The bigger issue is probably acceptance by filmmakers and clients. It can still be taboo to shoot professional work using a phone, which I do understand, but it was exactly like that with DSLRs, too. And today those are not only accepted, but almost ubiquitous. So who knows what the future holds for video tech, but I do think smartphones will play an even larger role for many of us.

Robb Montgomery – Founder & Director, Smart Film School

  1. In 2007 at the Canadian News Association conference I presented a keynote titled “Web Video is not TV” to several hundred news executives and showed how a popular music video (featuring a Canadian pop singer) was shot entirely on Nokia mobile phones. I screened the behind-the-scenes reel that showed the clever filmmakers revealing their tricks, and these editors were blown away by the quality of the video camera they already had in their pockets. I was already teaching video journalism to many of their reporters, so it was a natural transition to switch from shooting with prosumer camcorders to smartphones.
  2. The biggest limitation for journalists and reporters making good quality video reports and documentaries is the confusion that comes along with the phone manufacturers introducing so many overhyped video formats and technical variables. The choices for resolution, color space, codec, and bit rates can be overwhelming. I am always looking to develop better guidance for these folks on when it makes sense to shoot 4K vs HD, or enable or disable HDR, Dolby Atmos, ProRes, and Log/RAW shooting. Most video journalists just need simple, reliable, bulletproof settings that preserve storage and capture high quality footage.
  3. In the same direction as I illustrated in that 2007 presentation. Mobile video storytelling is a language and way of seeing and sharing big stories with small cameras. A student just won Best Documentary film at the most recent Mobile Journalism Awards competition. Her mobile film bested many others that were shot by pros working at large broadcast news organizations. That is proof that the vector is moving in the right trajectory – #Mojo is available to every storyteller who is willing to tell the truth without fear or favor. The best films from each #MojoAwards season are curated for scholars, journalists and the public to review. It will be interesting to examine that body of mobile journalism work as it grows over the next 10 years.

Jack Hollingsworth – Photographer/Author/Speaker/Lovecat

  1. It was February 18, 2011, on the idyllic Caribbean island of Barbados. It was love at first sight.
    In less than a year, I went from infatuation to obsession, from fun-toy to production-tool, from casual to intentional, from snapshot to photograph, from taking to making pictures. Since that serendipitous rendezvous with destiny, over the past 10 years I have shot over 1 million iPhone photographs, on 10 different devices, in 50 countries of the world. And in the process, I have humbly become a leading expert in iPhone photography for discriminating and discerning photographers around the world.
  2. I’ve had a camera in my hands since 1975. Almost from the beginning, it was never my exclusive intention to simply own and operate a photographic business. Instead, I wanted to live a photographic life. The iPhone camera, with its ease of use, simplicity, convenience, small form factor and computational power, has got me one step closer to realizing this lifestyle dream. With phone-camera in hand, I’m no longer just a photographer but a storyteller.
  3. We are living in the Golden Age Of Consumer Photography. Yes, dedicated cameras still own the commercial photography space. But phone-cameras own the consumer photography space. I am convinced, without a shadow of a doubt, that the iPhone camera will go down in history as the most influential camera of all time. I exclusively shoot with iPhone cameras and have no interest or motivation to change. The sky is the limit.

Dougal Shaw – Digital Business Reporter, BBC News

  1. Around 2015 I was using my iPhone as camera B for shooting news feature interviews. Looking at the footage in the edit I thought it was so good, I started to entertain the idea of just shooting everything with the phone. I tried doing it a few times and found that it made my kit so much lighter. I became more nimble and it really freed up my film-making. I felt liberated so I didn’t look back.
  2. The biggest limitation for me right now is battery life on the phone. I carry an old phone with me as my back up, and switch to that when my battery gets to below 10%. On shoots that last more than half a day it’s a problem. I look forward to better battery life, or faster recharging. I would also like it to be easier to connect mics to phones. The process still involves some tinkering and third party apps for me. I could do without this when I’m dealing with other things on a shoot.
  3. In terms of news, mobile filming has taken longer to take off than I imagined back in 2015. But tech revolutions are often slower than you anticipate (think electric cars). I see more and more young journalists coming through now who are used to filming their own content on their phones for platforms like Snapchat or TikTok. They want to continue as creative, self-sufficient storytellers. So the old narrative that a reporter needs to have a second, technical person to help them (whether a cameraperson or a radio producer), a Sancho Panza to their Don Quixote, will likely fade over time.

Neal Augenstein – Reporter, WTOP-FM and wtop.com, Washington DC

  1. Initially the goal was to reduce the amount of lugging of radio gear — laptop, digital recorder, camera, microphone. In 2010, when a multitrack audio-editing app became available on iPhone, that was the starting point. In the years since, the ability to produce for all the different platforms from a single device has made that goal a reality.
  2. The biggest limitation is that few apps are specifically designed for reporters, who want both a live stream and the ability to re-purpose afterward. But that limitation also makes it fun — contemplating “how can this new app designed for general use be helpful in my job as a reporter?”
  3. With changing social media based apps sprouting up, the different ways to communicate with the audience will continue to develop. Being a ‘real person’ with the audience is a great chance to learn what they’re thinking, and sharing a bit of yourself increases that connection. While so much of social media (and society) is divisive these days, I think the ability to demonstrate a commitment to ethical journalism and civility will become even more important than it is today.

Judie Russell – Founder and Video Coach at The Vidacademy

  1. I was working on The Young Offenders Movie, in the summer of 2015, as the Behind the Scenes producer. I was shooting on a Canon DSLR but I couldn’t always carry my gear to remote filming locations as there was already so much equipment to bring for the main crew. So I started filming with my iPhone 6, never thinking that the footage would be usable next to the DSLR footage. But when it came time to edit, I was surprised and excited about the iPhone quality. Soon after that, I sold my DSLR, Canon C300, XLR mics and dove headfirst into mobile filming. And I’ve never looked back since.
  2. I’m looking forward to a future where I can add all of my clips to the timeline and allow AI to create the first draft so I can then add the final touches in my style. Editing can be slow and sometimes inefficient, but we are already seeing AI make smarter editing decisions for us. Apps like Quik edit to the beat of the music, and software like Adobe Rush automatically reduces the volume of music when it recognises voices on the timeline.
  3. In the future, I can see three things happening: (1) Everyone will become a video editor – much like the adoption of PowerPoint and slide design in the late 90s. (2) Edits will be automated – not only will an editor’s life become easier, but when AI is mixed with Creative Commons footage, attribution-free music and computer-generated voiceovers, this should lead to far more fully automated videos. (3) More organised and affordable cloud storage – improved video tagging and metadata along with more affordable cloud storage should allow us to recall clips and memories in a really simple and quick way. All of this should mean that most people will primarily use mobile video to communicate their stories and messages to the world.

Darko Flajpan – VJ, MoJo and Broadcast Trainer at Croatian Radiotelevision

  1. In 2010 Glen Mulcahy brought an iPhone 4 to our VJ workshop. In 2011 we started MoJo Training for Circom Regional and TV Broadcasters.
  2. iOS is great, but also expensive for lots of newbies. On most Android devices there are no 25 and 50 fps settings. Also, editing apps on Android devices are not as good as on iOS. That’s why I am so happy that LumaFusion will be available for Android. Great thing for MoJo work and training!
  3. I hope there will be more creators with great ideas and approaches in storytelling, not just consumers in a metaverse.

Kai Rüsberg – Radio and TV Journalist & Trainer

  1. It was about 10 years ago. I had been filming a lot for my public broadcaster in Germany as a VJ since 1997, but the whole publishing process was totally traditional and ineffective: not closely connected to the viewers, and it took a long time to publish. As my bosses didn’t understand the advantages of mojo and only wanted to do it the way we always had, I started on my own. Changing the camera, I didn’t want to produce the traditional way either. Just replacing the camera and PC with the smartphone while keeping the same lengthy process of gathering footage, editing and voice-over wouldn’t do the trick. Therefore, I adopted the skills of a live reporter: do just one shot and film, show, interview, report, feel, expose, narrate everything at once without editing. So I developed the #Oneshot reportage for local news.
  2. The biggest limitation for the #Oneshot technique was getting good sound for interviews, as well as for the reporter’s narration while moving through the terrain of the scene. Therefore I always used a classical shotgun microphone with a self-soldered cable from XLR (female) to a 3.5mm TRRS plug. With the new wireless microphone solutions from Saramonic and Rode with two lavaliers, this gets even easier. But in the future we may run into the problem that smartphones get rid of all external physical ports, even USB-C.
  3. I started by changing the device: instead of using the smartphone, case, gimbal or frame with an external light, I’m now often using a dedicated camera. The DJI Pocket 2 is the tool for me. It delivers all in one device, even a wireless mini-microphone receiver, a gimbal and native 50p. It has the same elements and sensors as a smartphone, but all built in. Some hardcore #Mojos might say that’s not mojo, because it’s not all produced inside the smartphone: I’m not bothered. If you need the smartphone feeling, there is a dongle to connect the DJI to it and use it as the big screen or to transfer the footage.

Matias Amigo – Media Producer / Mojo Trainer

  1. In 2014 I was in Tanzania working in an orphanage. During my time there I had many incredible experiences, and on more than one occasion I thought “I wish I had a camera with me”, without knowing that everything I needed was in my pocket. When I returned from my trip, I started looking for information on the web about how to record with a phone and there I discovered Mojocon. Without hesitation I traveled to its last edition. It was love at first sight: working with compact equipment, achieving broadcast quality and enabling new ways of telling stories (because of its size and the amount of complementary accessories that exist) marked a before and after in my life.
  2. Without a doubt I think there are some aspects still to be solved; a lot has to do with the phone’s camera. It would be ideal if it had much more versatility and improved quality when it comes to exposure – being able to work better in low and extremely bright light, without video quality being lost in the workflow. On the other hand, I think the zoom is something that is being worked on a lot and we have to wait a bit longer for its massive implementation. Finally, the battery always ends up being insufficient; more battery life will allow us to work more comfortably.
  3. First we have to accept that many people generate content with the phone, maybe not in the way we would like, but with excellent results. And second, I think the biggest challenge ahead of us is to focus not so much on accessories and phones but on our creativity, which will end up making the difference, because the world doesn’t care what phone we use to generate content – it’s all about the idea.

Rob Layton – Senior Teaching Fellow (Journalism) and PhD Scholar, Bond University, Gold Coast

  1. It was while on a family holiday in the Scottish Highlands that I realized the panos my wife shot with her iPhone 5s were better than the wide angles I was shooting with a modest DSLR. I converted to iPhone photography from that moment. This also enabled me to enter ocean photography/videography, as housings for smartphones are substantially cheaper than those needed for big cameras.
  2. Sensor size and shallow depth of field in video have long been the largest impediments to mobile photography/videography. But we are seeing very encouraging advances with both so I’m optimistic for the future. It’s like a space race of sorts, with smartphone manufacturers trying to outdo each other to gain mobile camera market share.
  3. Ten years is a bit far ahead but I have no doubt computation is the future of imaging. Smartphone innovation leads the way, with incredibly fast and powerful processors and incredibly clever technology.

Björn Staschen – Founder NDR NextNewsLab, Mojo-Coach, Author

  1. In 2013 or 2014 I was reporting from a yearly protest march that we normally wouldn’t have covered unless it escalated into violence, so I had no camera crew. I talked to the online newsroom, who were happy to use smartphone video from an iPhone 5.
  2. For me it’s still the wide angle – I find close-up shots, filmed from a distance, quite helpful in some stories. Sadly, even the zoom lenses don’t always help.
  3. I wonder if AI will help to improve picture quality with regard to partial lighting, white balance etc.

Martin Nutbeem – Senior Digital Education Developer, University of Bristol

  1. I’ve been creating media for education for about 15 years. The focus has always been getting content created rapidly for use with students, while quality needed to be “good enough”. About seven years ago I dabbled with using a smartphone instead of larger video cameras and realised it was a realistic approach for most filming requests. At the same time I also started getting students to film each other with Flip cams, for example building students filming each other’s brickwork and then peer reviewing. I now regularly train and support staff and students to use smartphones to create media as an alternative to traditional written assignments.
  2. Editing on a 6-inch screen is obviously bonkers if you want to create more complex media. I think the affordances that foldable devices bring will give a level of polish that is tricky with the average phone screen currently. Folding to phone format for recording, then unfolding to edit, feels like a great step forward. I hope the trend develops enough to become actually affordable.
  3. When we first started running “Students as Creators” assessment projects, very few of the students would have used mobile devices to create content. Five years later, typically around a third of them have experience of using their mobiles to create as well as consume content. Openness to making media and confidence to have a go is increasing each year. When I’m asked to advise on filming now, it’s a lot more common for people to say they want to use their phone. I anticipate that as devices continue to improve and the lines between smartphones and wearables blur, we’ll see mobile content production move from being considered a convenient novelty to a serious option for many.

Matthias Süßen – Journalist/Blogger/Trainer

  1. My first smartphone was the iPhone 4 and I was immediately excited about the possibilities. I quickly realized that it would revolutionize media production. The whole process from recording to video editing to publishing was suddenly possible anywhere on a small device. Then there were the emerging social media networks, which for the first time made it possible for anyone anywhere in the world to report on their situation, completely independent of the large media corporations.
  2. The lack of the ability to install custom fonts is, for me, currently the biggest problem in the Android and iPhone worlds. In addition, there are well-known problems like fragmentation (Android) and lack of memory (iPhone). It remains to be seen whether new operating systems will further fragment the smartphone market in the future.
  3. The real and virtual worlds will increasingly merge. The production of AR and VR is becoming increasingly important, and there are already a large number of augmented reality apps in the app stores of iOS and Android. Social networks are increasingly being extended to include virtual worlds. Media creators should therefore think about how they can use this and how information transfer works in virtual worlds.

Terri Morgan – Co-Founder and Lead Designer, LumaTouch

  1. When we were developing our first iPad app, we got a HUGE amount of feedback from customers that they definitely would use our app on iPhone. Once we looked at how the smartphone can streamline the Capture/Edit/Deliver workflow, it was a revelation. To not develop for the smartphone would have felt like a failure.
  2. For smartphones, the biggest limitation that we see on iOS is that iPhones do not have a USB-C port. With a USB-C port, people could carry a small drive to record to and edit directly off of, or plug into an iPad for editing, without having to transfer media or rely on a 5G or WiFi connection.
  3. Developers are pushing the boundaries and finding that touch devices are extremely capable of complex workflows. In addition, as AI improves, more technical tasks, like perfect keying, titling from existing metadata, and correcting audio and video, can be delegated to the machine, leaving more time to focus on creativity and the actual meaning of the content.

Nick Garnett – North of England Reporter, BBC News

  1. After minidiscs, I was searching for smaller computer-based systems. Laptop recording and editing was good – but not portable. In 2002 we had Luci Live on laptops to allow us to broadcast live radio out and about, but it was only with the advent of the iPhone that it was ported to iOS. It wasn’t long before Poddio was released, allowing multitrack audio editing. In 2011 Voddio was released, allowing video editing. I was now using a phone to record audio, edit audio, mix multiple tracks, send audio, go live on radio, shoot and edit video and send it in to the BBC. By 2013 I was using it to go live on TV as well.
  2. There aren’t any limitations in an audio sense. I can’t see how it can be improved or bettered. The biggest changes I can see coming out of phones are further improvements to the camera. The cinematic mode on the iPhone 13 is the biggest game changer in about 10 years. Next year it will get custom frame rates and 4K technology. And then we can say goodbye to ENG cameras and DSLRs (for some jobs)! I do worry about one future development – phones without sockets and a push towards Bluetooth and MagSafe.
  3. As I’ve said before, Mojo is dead. Every journalist is now mobile and uses mobiles. Perhaps not to the extent we do, but usage will grow as developers harness the phone’s power and create ever more simple interfaces. We should focus on the way we tell stories rather than the devices we record and edit them on.

Bernhard Lill – Multimedia Journalist and Trainer

  1. It was in May 2014 and I was attending a video class. We were supposed to shoot a short news report and I had my old iPhone 4S with me. And I thought: what the heck, I’ll just try to film it with this smartphone and an external mic. And it worked. Since then I’ve produced most of my digital content with Android devices or iPhones.
  2. Actually, I don’t miss that much in smartphones. Okay, maybe a bigger sensor for better low light videography. But altogether, it’s less the tech that makes a good movie than the creativity of the producer. I mean, Sean Baker shot “Tangerine” with three iPhone 5s handsets.
  3. Mobile Reporting is becoming more and more mainstream already, even among public broadcasters in Germany. Apart from that, I’m pretty bad at foreseeing the future of tech. In 1991, I predicted that e-mails would never become a tool of mass communication. Well …

Philip Bromwell – Digital Native Content Editor at RTÉ News

  1. I became intrigued by the idea of using my then iPhone 4S (if I remember correctly!) in 2013. I filmed my first news story for RTÉ tv news in the autumn of that year. I was curious about the phone’s camera capabilities and wanted to see what was possible. The resultant story worked out really well and was broadcast on our main bulletin. No one batted an eyelid over how it was created – it sat comfortably alongside stories produced in the conventional manner. Since then, I have steadily increased my use of mobile devices for content creation – to the point that now I lead a team that produces original content for all platforms (tv, radio, online and social media) entirely on mobile devices.
  2. With every generation of phone, the device’s “limitations” become less and less, well, “limiting”. My iPhone 12 Pro Max is a much more powerful tool than my long-forgotten 4S. That said, filming with a smartphone still requires you to accept that you can’t really zoom like you would with a traditional ENG camera. However, that’s part of the fun of filming with a phone – it brings lots of creative possibilities in how you approach telling a story. Importantly, I still maintain that it doesn’t matter how good your phone is if you can’t tell a good story…
  3. I believe technology and audience behaviour shape today’s media landscape. They will also shape what happens over the next decade. As a piece of tech, phones are constantly evolving… so that suggests they will only become increasingly important as media production tools (particularly outside of mainstream media). How mainstream media emerges from the pandemic isn’t yet clear – but all the signs are that low cost, creative, nimble production methods (such as phones) could/should have a big role to play.

Mark Egan – Mobile Video Specialist

  1. I first used smartphones for real filming when my normal camera failed and I used an early iPhone instead. I was amazed how good it looked. I soon realised smartphones were making shooting video more accessible … but also the apps created a really smooth workflow. The latest phones let you shoot really high quality footage, edit it, share it and go live. In addition, as the audience now consumes more on mobile, it makes sense to create on mobile so you can be native to the platforms.
  2. Lack of optical zoom has historically been a limitation, but that is going away. I would like better battery life, but my real concern is audio. You can record good audio, but you do not have the multiple track options you have on a traditional camera. The Lightning or USB connector can also be a bit flimsy. If manufacturers take away the charging port and replace it with wireless charging, we may even lose the ability to plug in an external mic. The ability I’m looking forward to most is complete “over capture”: you shoot and it gathers so much information that you have full control of the focus, exposure etc. in post production. We have a bit of that now, but it will get much better.
  3. The mobile journalism movement is really all about allowing anyone to tell a story, anywhere, anyhow and on any platform. It has already become quite mainstream and as apps, accessories and hardware get better it will be unthinkable to be in the media industry without being able to create top quality content with your phone. I think we will also see more artificial intelligence being used, more cloud services making the type of device less relevant and some new media storytelling techniques emerging.

Silas Bang – Editor of Video, Jyske Fynske Medier

  1. With the launch of the iPhone 6S in 2015 I felt the quality was now good enough and I was tired of editing in Premiere Pro as my PC crashed again and again. When editing in iMovie I had no problems.
  2. It is a limitation that we need a lot of adapters to plug in a couple of mics. I would love to be able to have more wireless mics working via Bluetooth or something better.
  3. I do not believe that the big cameras will or should disappear, but more and more people in both media and especially communication will realize that it is that much easier to get into visual communication with a phone. And for most, the quality will be more than enough.

Mike Buttery – BBC England MoJo Project Lead

  1. I have been using mobile phones to capture video since VGA camera phones in the late ’90s. I find them the perfect device that is always with you to document the world around you. Since then I have been a professional cameraman working with film, tape and solid-state cameras, creating content for TV News, Entertainment, Music and Sport. As a BBC News Operations Manager during COVID-19, I wanted to help inform our audiences of what was happening in the world by training the BBC England News teams how to safely tell stories on a low-key, lightweight device – a device all our audiences are familiar with. I issued 50 MoJo kits to news content makers across England and trained them alongside the BBC Academy. We now see stories told to our audiences across all our output using MoJo.
  2. Overall the mobile device with all the ancillary kit is very robust and versatile; however, the biggest limitation is the ability to zoom. It’s great that most mobile phones offer multiple lenses to choose frame size prior to filming, and of course the ability to shoot in 4K allows the filmmaker to crop in during post-production. I am really looking forward to being able to zoom in and out during capture just like with a traditional camcorder. For documentary and guerrilla-style filmmaking this would give you so much flexibility in capturing the moments without using digital zoom. Along with this, having at least one grade of built-in ND filter would be another great step towards a professional standard.
  3. With technology advancing at a rapid rate, I would expect in the next decade to see larger sensor sizes in most mobile devices which will improve depth of field and low light. Also I think we will see zoom lenses and other professional add-ons. Technology will bring more people to MoJo as it will make it easier to operate and this will increase the market to consumer level which in turn will rapidly reduce the gap between professional kit and kit aimed at consumers.

Umashankar Singh – Senior Editor, Foreign & Political Affairs, NDTV India

  1. I started using a smartphone as a media production tool a long time back, but it was occasional. I can remember using ‘a phone with a camera’ (it was not a smartphone) in 2006 to record a few clips for my report. This continued over the years. Actually I used it only when a big camera was not with me, to record small clips to be used for my TV reports. But it was in 2016 that I adopted the smartphone as my main media production tool. The reasons were mixed. My organisation NDTV India decided to go the mobile journalism (mojo) way. I willingly and happily adopted it as it gives me ultimate freedom as a TV journalist. A smartphone always remains in your pocket, so you don’t need to wait for the big camera/camera person to reach the location where news is happening. You can start on your own, especially in a situation where something good or bad happening was not planned or foreseen. And even in a planned shoot, you can do it on your own, without others’ help. You can do live reporting through a phone alone by using an app like LiveU or others. Last but not least, the reason is financial too: mojo has reduced production and travel costs, so I can do the production single-handedly.
  2. Mobile phone cameras have their own limitations. Camera quality is improving, but it just can’t match the technicalities of big cameras (like zooming in etc.). There are apps and small devices to enhance the capabilities, but that’s not enough. Nowadays we are working with smartphones that have cameras. As a mobile journalist, I sometimes think I should have a camera with a smartphone – that is, a camera which has all the features of a smartphone. What I mean to say is that as of now, smartphone manufacturers put a ‘camera into their smartphones’. It may sound like a weird idea, but I am hopeful that some camera manufacturer will soon put a ‘mobile phone into a camera’, so that mobile journalists like me feel more empowered as media producers.
  3. As I said, smartphones are like a matchbox which can light the fire within seconds. You don’t need to rub stones for hours to light the fire. This is in the production sense. So I think it will go a long way. Not just because it’s easier to carry or cheaper than a big camera unit, but also because smartphones have greater reach. At every nook & corner of the world, people have smartphones in their hands. They are generating video clips of events, natural disasters, political chaos, inhuman behaviours, terrorist attacks & acts… anything for that matter happening in front of their eyes, for the world to view. This phenomenon will be strengthened even more in the coming years.

Matthew Feinberg – CEO, Alight Creative

  1. It was 2010. Apple had just released iMovie for iPhone, and the company I worked for decided to try building something similar for Android. My original job was building developer toolkits for streaming mobile video, but I ended up on a two-person team doing research into what would eventually become KineMaster. From the beginning there was always resistance. Even with Apple leading the way, people laughed at the idea of editing video on a phone. People still laugh, but the reality is that tens of millions of people now produce and edit video content on smartphones every day.
  2. There are so many amazing new ways to capture video on a smartphone, from 4K and HDR to high frame rates and ProRes. While these open up many new possibilities for creative freedom in post, they also come with a big drawback: huge file sizes. Unfortunately, storage management is a bit of a mess. Increasingly strict app sandboxing, unavoidable for security, means apps need to keep their own copies of files they work with. Under-the-hood optimizations (to avoid storing duplicate data) deep within the operating system help to mitigate the wasted storage these copies would otherwise incur, but also mean that the operating system’s visual representation of used and free space can be wildly inaccurate. Finding a clear and simple way to visualize used space without sacrificing accuracy will be a big challenge for operating system makers, but one that I think will become increasingly important to address.
  3. The line between traditional workflows and mobile workflows will continue to blur. I’ve moved on from KineMaster to found Alight Creative, where we make desktop-quality professional motion graphics and video compositing tools for mobile, such as Alight Motion. But we (and companies like us) are now naturally beginning to fold desktop support into our roadmaps. With new MacBooks supporting mobile apps, and Windows adding support for Android apps, the tools that we and other companies are building for mobile will naturally evolve into fully cross-platform tools. As apps like these become more and more full-featured, they will become serious alternatives to traditional desktop-only apps. The decision of whether to use a desktop, laptop, tablet, or phone will, for most people, become a question of form factor and budget, rather than a question of functionality. This will lead to new levels of freedom for content creators in their choice of tools, and an easier learning curve for new creators entering the industry. The future of content production is indeed bright!

Tim Bingham – Photographer

  1. Looking back, I first discovered the video capability around 9 years ago when I started interviewing people for my job. In regards to photography, it was probably about 6 years ago that I started using the smartphone for photography. I have always seen the smartphone as part of my toolkit.
  2. I am going to write this response from the perspective of an Android user. Firstly, the progress that has been made over the past few years has been incredible, particularly with AI optimisation – however, that can be a disadvantage in certain circumstances. Having DNG or raw capability on phones has really enabled me to capture photos that can be edited on the phone and printed at A0 with no loss of resolution. The biggest limitation for me at the moment is not having a variable aperture when using one of the telephoto lenses. This for me would be a further game changer in the use of smartphones.
  3. I think we will see big movements in the next few years; however, I am very sceptical about AI taking over completely and not allowing the user to be creative. I would definitely envisage a manufacturer bringing out a smartphone with zoom capability and a variable aperture – let’s face it, we are nearly there. I envisage larger sensors, particularly with the new developments in single lenses using nano structures, which can then offer true optical zoom. In regards to the whole movement, I am watching to see where it all leads, as it’s interesting speaking to other filmmakers and photographers, and many of them are saying that prices for smartphones are expensive compared to mirrorless and DSLR cameras, which still have the added advantage of all the various lenses. I can see a big future in the whole area of journalism, as producing and editing high quality content on the smartphone is already happening.

Marcel Anderwert – Mobile Reporter, SRF News

  1. In late 2014 I filmed my first Mojo report for Swiss TV’s main evening news show «SRF Tagesschau», using an iPhone 6 and the amazing FiLMiC Pro app. Although the interview did not look particularly great back then, nobody in the newsroom noticed the report was shot with a phone.
  2. It would be great if it was possible to easily attach a good telephoto lens to a smartphone. Moreover, I am looking forward to sensors and lenses getting better, in order to have more possibilities for working with depth of field.
  3. In the last seven years, I have always been ready to go back to filming with a normal camera. But it never happened. I still like to work with my light and low-key smartphone equipment very much. Especially for more personal stories.

Marc Blank-Settle – Smartphone Journalism Trainer, BBC Academy

  1. I’d love to say I discovered or chose it, but bigger brains at the BBC got there before me. We had developed an application on Symbian to send material in from Nokia devices but that was changed to iPhones and I was asked to deliver training on how to use it.
  2. The lack of a decent zoom on most smartphones remains a challenge. Video journalism often involves getting close to the action visually and with a decent zoom you can stay at a safe distance. Most smartphones can’t zoom much before the quality gets worse so people either have to get closer, which could be dangerous, or they can’t get close-ups, which limits the story that can be told.
  3. Better cameras with a periscope zoom lens, hopefully. The big question will be how Apple’s AR glasses change things, e.g. can they be used to gather and send content? If not, the smartphone will remain dominant for the rest of the decade and beyond.

Laurent Clause – Journalist/MoJo-Trainer/Blogger

  1. As editor-in-chief of a computer magazine devoted to the Apple world, I was invited in 2007 to the launch of the iPhone in San Francisco. I was already pushing my team to produce video content for the web at the time. When I saw the 1st iPhone, even though it only took still pictures, I immediately understood that my job as a journalist would change with this new device as soon as it could shoot video. I really started producing video content with the iPhone from 2010 – first by editing it on the computer, then, from 2012, directly on the iPhone.
  2. The main limitation of the smartphone in my eyes remains the lack of a real zoom. I am asked more and more often to film on a smartphone because people are less afraid of it. But when you need to make the people around unrecognizable, a super telephoto lens is best, giving a shallower depth of field / more bokeh so you can avoid having to blur faces during editing. I am also waiting for the adoption of the Bluetooth protocol by microphone manufacturers. For two years I have had a prototype Bluetooth transmitter for XLR microphones which is just great, and I do not understand why manufacturers do not offer real Bluetooth microphones. That would allow us to record good sound – even if it’s not as good as professional sound – without connecting a receiver to the smartphone.
  3. I hope to see optical zoom and Bluetooth transmitters coming soon. I also see live video developing rapidly on smartphones with multicam and multiplatform control room applications available to everyone. 

Martin Heller – Multimedia Journalist and Trainer

  1. I started in 2012. I was working with an iPhone 4s back then. Times have changed, so much improvement!
  2. For some journalists, it’s totally unprofessional to use smartphones for interviews. I think that’s wrong and the biggest and most dangerous limitation in your head because it limits your creativity. Of course, there will be improvements technically in hardware and software. But I am already very happy and thankful to have such great gear to tell stories easily, fast and experience the surprise and enthusiasm in the eyes and smiles of the participants of my workshops.
  3. Maybe the „movement“ will be eaten up by its own success. As mobile journalism has become the „new normal“ in more and more situations, it’s not revolutionary anymore. I really love the community of mobile journalists and see what’s possible today with all the ideas growing out of a basic gear and knowledge. There’s so much undiscovered potential in smartphones – and humans!

Nicki Fitz-Gerald – Artist, Illustrator and Teacher

  1. I was already using my phone as a creative tool to take photographs and create digital art. Creating videos was a natural next step but it really took off for me when I was invited to speak about my mobile photography at the first MojoCon, Glen Mulcahy’s brilliant gathering of mobile creators (journalists, photographers, communications folk, bloggers) in Dublin 2015. This event and subsequent annual Mojocons and Mojofests showcased a myriad of mobile creative possibilities and gave me the confidence to start making my own short films for the communications department I worked for at the time.
  2. For photography, I’d like a fabulous built-in telephoto lens on my iPhone. I’d love to flatten images seen from a distance in the way that you can with a DSLR camera. Always better battery. Better resolution on the front-facing camera so I can do pieces to camera and see myself while filming without having to compromise on quality.
  3. I am a bit out of this space as I’ve been focused on my digital art but there does seem to be a lot of excitement around everything happening virtually, and after the pandemic (is it over yet?) and numerous lockdowns, this seems to have accelerated activity. I heard that we will be able to visit people and feel like we are actually in the same room so maybe the future is creating more content for virtual and augmented reality environments. Personally, I’d prefer to pop over to my mum’s and have a Sunday dinner…I just can’t seem to make my roast potatoes as good as hers.

Judith Steiner – Video Production Coach

  1. I used to work as a video journalist. I loved doing video productions, but the longer I worked, the more I struggled with the pressure of news and the fact that I could only scratch the surface of the topics. So I started thinking about what I would really enjoy doing. That’s how I came up with the idea of a talk show that I host and film at the same time. That was 10 years ago, when smartphones were still something quite new in our lives. I was fascinated by the technology. So I bought three iPhones and filmed my talk shows with them. It became a field of experimentation for me: how do I get a smartphone onto a tripod, how can I connect an external microphone? At that time I had to order tripods and microphone adapters (TRS to TRRS) from the USA. That’s how my story with the smartphone as a media production tool began.
  2. So far in my video courses I’ve said: “The biggest disadvantage for me when filming with the smartphone is the lack of shallow depth of field.” The Cinematic mode on the iPhone 13 now makes it possible, and it works amazingly well. Now it’s the small things, but they improve again with every new smartphone: light sensitivity, image stabilisation (especially in the selfie cam).
  3. I think the lenses will evolve. With the iPhone 13, we are at 3x zoom with the telephoto lens. More will probably be possible in the next few years. The resolution will increase to 8K. There will be even more automation in the editing apps, especially in speech recognition, for example, to generate subtitles quickly.

Chris Cohen – CTO, Filmic Inc.

  1. The moment Apple unveiled the iPhone 4 with a high-quality BSI sensor, I realized that media creation was on a new trajectory that could not be thwarted.
  2. I’m looking forward to the advent of higher-precision depth mapping techniques on smartphones. This will enable new aesthetics and workflows that far exceed what we can presently imagine.
  3. In the coming decade, smartphones will have multiple ToF sensors, powerful TPUs capable of realtime whole-image transformation, camera modules with Quantum Dot substrates (and possibly non-Bayer color filter arrays). The high end ‘prosumer’ camera market will finally collapse. In every pocket a device will reside that equals or surpasses the best camera systems available today. At that point, the technology itself will cease to be relevant. It will fade into the background. Invisible. Only skill and technique will matter.

Yusuf Omar – Co-Founder, Hashtag Our Stories / Wearable Journalist

  1. I first got into mobile journalism in about 2010. I wanted to be a foreign correspondent and there weren’t that many opportunities for somebody young and inexperienced like me so I started travelling around the world using mobile devices and small cameras to capture stories. 
  2. I think the biggest limitation of mobile journalism is still perception to some degree. You can create a cinema ready piece of content with a mobile device and you can interview the president of a large country, but the form factor and accessibility means that sometimes it’s still not taken very seriously until people see the results.
  3. I think the future of storytelling and our interaction with technology is augmented reality layering the internet onto the world and telling stories through our eyes. Over the next 10 years we will transition from mobile phones to wearable computers on our faces.

Klaus MittmansgruberVideo/Content Reporter and MoJo Trainer OÖN

1. When I came across Glen Mulcahy’s Twitter account in the fall of 2017, a new horizon opened up for me: Independent, autonomous and high-quality production of video packages with my smartphone (Samsung S9), the „production studio in a pocket“. After some first successful trials it became clear to me: That’s it! All of a sudden I wasn’t „only“ a news editor but a cameraman and finally also a video editor. This combination – while being quite challenging – makes it possible to think a story through independently from start to finish and put it on air / online.

2. On the technical side, the biggest limitation/challenge for me is shutter speed, which can often only be ideal when you have the right (lens) filter at hand. Particularly under difficult lighting conditions, this can be a problem. Furthermore, you are (still) limited by the zoom range.
Therefore, some events aren’t really suitable for „MoJo“, which you must be aware of and take into consideration when planning a shoot. Another (costly) challenge is storage: if you shoot a lot, you should have at least 256 GB of space.

The actual work process of shooting video as an active reporter (for instance an interview) can be a tricky double-task: On the one hand you need to concentrate on conducting the interview and paying attention to the interviewee, on the other hand you also need to keep an eye on the image and the audio. But like Glen Mulcahy said: If working the MoJo way becomes a routine like riding a bike, you don’t have to worry about something falling by the wayside. 

I’m looking forward to new innovations like more powerful gimbals that can handle the attachment of accessories to the phone as it’s currently still difficult to work with external microphones and lenses. I’m also eager to see further development of the FilmicPro app and a way to have a completely wireless audio transmission from microphone to phone.

3. The work of TV and video journalists will change drastically. The required skills will blend more and more: editorial work, camera and video editing in a one-man band, but with the severe danger of the journalistic aspect falling short in the process. Smartphones will get better year by year, and with mass 5G coverage, data transfers will become much easier and faster. The devices will grow into dependable live cameras.

Ivo BurumTV Producer, Lecturer, Mojo Trainer and Digital Consultant

  1. I had been running a TV department for ABC TV and the Nine Network in Australia, where I had used DV cameras since the 90s to create formats that enabled our audience to be part of our productions. In 2003, I moved to a Nokia phone with a VGA camera. But it wasn’t until the iPhone, the App Store and edit apps that I began to see the potential. In 2010, I decided to take all that I had learned with DV production two decades earlier and embark on a PhD investigating smartphone production. I wasn’t all that interested in creating UGC, but complete user-generated stories (UGS). I saw an opportunity to provide Indigenous Australians living in remote communities with an opportunity to create their own local voice. With funding from the Australian Govt and support from Apple and Vericorder, I developed NT Mojo, a project that trained nine Indigenous people living in remote outback Australia to shoot, edit and publish local stories using the iPhone 3. Four of them got work as mojos and two won national film awards. We were onto something, so I spent the next bunch of years introducing mojo to more than 60 media companies and many community groups globally.
  2. Mojo’s biggest limitation is also potentially its biggest asset. Many people involved in mobile journalism have never made video stories, so even in 2021 there’s an under-appreciation of the craft of visual storytelling. Occasionally this can lead to a new, dynamic interpretation of the visual digital form. While this can be a plus, what’s required is a balance between a prevailing techno-determinist view and a much needed creative and editorial perspective—visual literacy. The second limitation is not being able to ‘easily and quickly’ record and manipulate split audio tracks onto my recorded video.
  3. Recent multi-lens smartphones with excellent steady-cam functionality have enabled me to replace cradles with clamps, resulting in much lighter and more affordable kit. This means much safer use for mojos working in conflict and other zones. I see the ecosphere growing even more than the DV space did in the 90s, and mobile continuing to grow to become even more the new digital pen that we never leave home without. Finally, I see mobile as the conduit between the community, education and professional storytelling spaces.

Richard LackeyMarketing Manager Fujifilm Middle East & Africa, Electronic Imaging & Optical Devices

  1. In about 2017 I started shooting video with the original iPhone SE and FiLMiC Pro. My creative and technical background is with cinema cameras, cinema color and post production workflows. I was curious to explore whether polished high-end video, or even a “film look”, could be achieved with video captured on a smartphone by taking the same technical approach and post production workflow as with a dedicated professional camera. I wanted to see how much of a role the source video quality played in producing the end result.
  2. Almost all of the major device limitations that I faced back in 2017 have been resolved by now in the current generation of smartphones. I don’t have experience with Android devices but with the iPhone, the main limitation is still related to dynamic tone mapping, although it is better than it used to be.
  3. I feel that achieving the kind of high end polished results that I started out pursuing is now easier than ever, and the knowledge required, and post production tools are more accessible than ever before. There is definitely more interest in using smartphones for video capture, and more people are pushing the limits of what is possible.

SJ van BredaScreenwriter/Director/Editor

  1. I was fresh out of film school, had to move back to my home country, and had no contacts, no gear and too many ideas. I saw the Nespresso Talents 2018 competition and decided to enter and just shoot on my phone, as the contest was for vertical films anyway. I made the top 10 that year with Nespresso and realised that I could make all the ideas in my head with what I already had. It was a great moment of realisation for me, because I think film school brainwashes you a bit into believing you always need all the gear and the camera etc. But no, you don’t need it, just make the film.
  2. The performance of smartphones in low light, and their abilities with depth of field. The soft focus of a smartphone is very distinct and gives it away a bit. Once smartphones can handle low light conditions and lens options can mimic cine cameras, all bets are off. They’re getting better year on year, so we’re not far off. 
  3. This phenomenon has been observed before. At one point, showing up to set with a DSLR was considered laughable; now entire features are shot on them and play on global OTT services like Netflix, and regularly screen in huge cinemas. It will be the same for smartphones, and I’m honestly glad I joined the trend early-ish. Because at the end of the day, it doesn’t really matter how you shot your film. What matters is story and character and creating cinema. Any democratisation of that through more widely accessible tools will allow voices to be heard who would never have been given the chance under previous systems.

Cielo de la PazDesigner/Photographer/Filmmaker

  1. About 4 or 5 years ago, when I had to publish videos to an online learning platform, I realized the smartphone was good enough to produce quality media.
  2. Depth of field is still pretty limited although it is getting better. I’m really looking forward to how computational photography can really increase production quality.
  3. More and more professionals will be incorporating the smartphone into their toolkit and it won’t be so revolutionary to be using a smartphone to film anymore.

Wytse VellingaMobile Storytelling Expert

  1. The smartphone came into my workflow after purchasing the Nokia Lumia 1020, a smartphone with a very good camera but a very limited number of apps. After experimenting with it a bit I met up with Glen Mulcahy and Karol Cioma and they taught me how to use an iPhone for journalism. After that I never looked back and recorded, edited and published for radio, TV and online from my phone.
  2. There are hardly any limitations left that get in the way of telling a story. There are a couple of small things that could still improve a little, for example the low-light abilities of a smartphone camera and the ability to zoom or change the depth of field. A phone like the new Sony Xperia Pro-I comes really close to being a perfect Mojo phone.
  3. In 5 to 10 years there will be no difference between shooting on a phone, a mirrorless camera or even a wearable device like smart glasses. Where the phone will always have an edge is that it is a multifunctional device: being able to shoot, edit and publish from one device makes it a winner.

Yegon EmmanuelCo-Founder and Communications Director at Mobile Journalism Africa

  1. We started sometime in 2017. We were still in university then and had just come into contact with the MoJo concept. After a workshop on the same, we decided to mainstream the idea in Kenya, and that’s how our journey officially started in January of 2018.
  2. Initially, the public didn’t appreciate #MoJo as a way of telling stories but that has drastically changed now. People have finally seen the potential and the possibilities that come with being able to effectively and efficiently tell stories using their smartphones. We see this as a good sign as it means more people are getting into the space, and therefore more stories will be told. This way, we’ll get to achieve our goal of retelling Africa’s narrative faster, one story at a time for we believe #OurStoriesAreBestToldByUs.
  3. Here on the continent, this is going to expand tremendously in the next decade. I foresee a connected continent, with a huge network of autonomous storytellers reporting from different locations in real time. The extensive use of wearables, 360° cameras, AR & VR as well as gamification is on the horizon but #MoJo will dominate for many!

Sara HteitVideo Journalist, Mobile Journalism & Digital Storytelling Trainer

1. I am a video journalist, so I use a camera for telling stories. In 2017, I was assigned to train youth and refugee communities in Lebanon to produce stories related to them and reflect some positivity back to their communities. Because big cameras are expensive and we cannot afford to buy them for everyone, we decided to use a tool that everyone has: their smartphone. The first training project, by DW Akademie, took place in a Syrian refugee camp over three months and covered everything related to video production with the smartphone, from basic photography to advanced video filming and storytelling. So briefly, my relationship with mojo started as a trainer, and then I began using it in my own reporting. During that year I shifted from filming stories with my Sony PX70 to my iPhone 10.

For training, the smartphone is not just an electronic device that contains apps and cameras; it’s a very powerful tool, especially in developing countries, because it can give everyone the power to cover and report independently without relying on a big crew (cameraperson, editor, TV station) or any media outlet. It also gave people the possibility to tell the story in a different way, one that is very close to them, to their community and to their audience.

As a video journalist, my smartphone is the only camera that I use when I report and film a story. During the Beirut Blast, I produced stories immediately in the field. I just carried my small bag containing a small tripod, a torch and a microphone. I was moving around very lightly, I was very close to people and I filmed more than one story a day.

If I want to summarize my experience with the smartphone over the last four years, I would say that reporting a story doesn’t require a lot of equipment and money. For good storytelling, one should be close to people, and with a mobile phone you can film whatever you want. We also shouldn’t forget that mobile phones broke the monopoly of the huge production media outlets, restored the balance of power in dictatorships and became a tool for exercising freedom and telling the world what is going on, anywhere and anytime.

2. I work in developing countries, let’s say Lebanon, Tunisia, Morocco or Libya; some of these countries have been at war, like Libya, or are passing through a very big crisis, like Lebanon. Our biggest limitation is not having a truly professional application for filming and editing on mobile without paying money. As I know from my students, online payment is a very big challenge. People also pay a lot for a mobile with good camera features, and after paying for an expensive phone, some can’t afford to pay for applications on top. Furthermore, countries facing an economic crisis often don’t have the option of buying applications online at all. And with the free apps already available, we face a lot of limitations related to advanced audio editing or writing in Arabic. What we are looking for is a free app that contains everything related to video production: an app where we can add graphic templates for text, build or use attractive transitions, and edit the sound properly.

3. I don’t think it’s a movement; let’s call it an evolution. And as long as we are using our mobiles to stay connected to the world, mobile journalism will remain the best way to tell stories from all over the world.

Anna-Katharina SchubertCross-Media Journalist/Presenter/Lecturer

  1. Since 2017, I’ve been in the social media editorial team for “Business and Consumers” at the WDR, which is a member of the public-service broadcaster ARD in Germany. We quickly realised that it was worth implementing moving image content specifically for Facebook and later for Instagram. As a TV journalist in the team, I went straight out and shot the first mobile reporting videos with my Samsung S8. To this day, I still really enjoy shooting with small equipment for our social media account Kugelzwei, also because I discovered cross-media work in journalism for myself.
  2. Whether as a filmmaker or as a lecturer for mobile reporting, I have one big wish for the future: standardised (charging) connections! That would make a lot of things so much easier.
  3. Mobile reporting will continue to be successful. On the one hand, because we as journalists will (have to) increasingly work cross-medially, and the smartphone is perfect for this if you use it correctly. On the other hand, I also think that this market (apps, equipment, etc.) will continue to develop and produce innovative tools, because companies have now also understood the value of training their employees in this field. The demand for workshops on “shooting and editing videos with the smartphone” is huge.

Gibran AshrafMoJo Journalist and Journalism Trainer

  1. I chose the smartphone as a media production tool very early on in my journalistic career, circa 2009. Back then I was a beat reporter for a daily newspaper, dependent on public transport to get around. To save time between covering assignments and filing to my desk, I realised that being able to write out my entire report and email it in, instead of waiting to get back to my desk and then file (especially if I had back-to-back assignments to cover), was extremely helpful. Back then, I used a Nokia Communicator which provided me with a full-sized keyboard to type out my assignment and email it back to my desk. From then on, there was no looking back, and as smartphones evolved and became better, I started to use them more and more for reporting, especially in the 2013 Pakistani elections and several other reporting assignments.
  2. Looking forward, what I would really like to see is greater mainstream adoption of Augmented and Virtual Reality technologies at both the creator and consumption levels. We can also expect improvements in video capture and editing capabilities, with software advancements allowing access to several features currently found only on desktop computers.
  3. Over the next decade you can expect the emergence of highly specialised and incredibly powerful mobile devices which will become the central device in all forms of computer-based workspaces and completely replace the current ecosystem of multiple devices such as laptops or workstations. We will also see greater use of smartphones as the primary devices on which content is created, whether by individuals and amateurs or by groups and organizations for more professional use, especially by the younger generation who are natively adept at the emerging technologies. We will also see greater adoption of AR and VR as media formats.

Matthias SdunDocumentary Filmmaker, Coach for Video and Visual Storytelling

  1. I had to rely on my smartphone for the first time in 2009. My camera broke during a documentary shoot I did in Saudi Arabia, and the only working device left was the old Nokia I had at that time. I started teaching classes with smartphones around 2011. I had been working as a video journalist at that time and had been teaching classes for about five years. At that point more and more people from print and online were trying to implement video. Bigger cameras were not affordable for most of them. But smartphone cameras were everywhere by then.
  2. Sound recording in video is still a big issue. Being able to record with several microphones on multiple audio tracks included in the video would be great.
  3. We see image quality improving through artificial intelligence, and smartphone video footage with shallow depth of field is coming up right now. This is going to improve and the videos will look much more cinematic in the near future. Augmented reality is huge. Being able to put photo-realistic 3D models in your video is fun and I think there is far more to come. Connecting the smartphone with far tinier drones than we have today to produce high-res videos is another big thing to come, I bet.
#49 What’s new and useful in iOS 15? (by Marc Blank-Settle) — 24. October 2021

#49 What’s new and useful in iOS 15? (by Marc Blank-Settle)

Preface

So far, all the blog posts on smartfilming.blog were written by myself. I’m happy that for the very first time I’m now hosting a guest post here. The article is by Marc Blank-Settle who works for the BBC Academy as a smartphone trainer and is highly regarded as one of the top sources for everything “MoJo” (mobile journalism), particularly when it comes to iPhones and iOS. His yearly round-up of all the new features introduced with the latest version of Apple’s mobile operating system iOS has become a go-to for journalists and content creators. iOS 15 just came out, so without further ado, I’ll leave you to Marc’s take on the new software for iPhones and don’t forget to follow him on Twitter! – Florian – smartfilming.blog

Introduction

Doesn’t time fly? It’s already a year since I made a video looking at what was then the latest version of iOS, the operating system on iPhones.
It’s therefore also a year since the equally traditional complaint of ‘preferential treatment’ for Apple over Android, the operating system on around 70% of smartphones globally.
However, the iPhone and iOS remain the dominant platform for mobile journalism.
It’s also the case that this review of iOS 15 will be far more relevant, far more quickly, to iPhone owners if the pattern of previous releases is repeated. iOS 14 came out on 16 September 2020; within a week it was running on more devices than Android 11, released just a week earlier, reached in almost a year.

iOS 14 got onto millions of users’ iPhones within weeks of its release.
The latest version of Android is adopted much more slowly than iOS.

14 or 15?

In addition to new features and functions, iOS 15 also contains bug fixes and security updates to protect your device against malware, spyware and viruses. But in a radical departure, users can for the first time get all these fixes and updates without taking the new version of iOS.
Up until now, the only way to get the latest protection was to get the latest software version. But now, you can stay on iOS 14 and only take the security updates included in iOS 15 and not the new features.
To do this, go to ‘Settings-General-Software Update-Automatic Updates’ and turn off the options you see here for downloading and installing iOS updates. When Apple releases security patches for iOS 14, you’ll see them in the Software Update menu instead of the iOS 15 updates.
For some users, especially with older devices, this strategy might be worth considering. If you’re still on the 6s or the original SE and your battery depletes quickly, the extra strain of iOS 15 might not be worth it for you. 
Or maybe after reading this review, you might just want to keep everything as it is in terms of how your device works, but you understandably want to take the bug fixes.

Which devices can get iOS 15?

If your iPhone is a 6s, original SE or newer, iOS 15 can be downloaded to it.
But not every feature is coming to every iPhone that can download it: rather than frustrate users by giving their phone features it will struggle with, Apple has chosen simply not to make them available on older devices.
Most people won’t be aware though that their 7 or 8 Plus is missing out on new goodies – although you will, after reading this review.
The cut-off tends to be the iPhone X and older: if you have one of those devices, then there are about ten additions which you won’t get. Anything newer and you’ll get everything, although there are a few extra things reserved just for the iPhone 12 series of 2020. When this review gets to those features which are only for certain phones, I’ll flag that up.
Additionally, some things which Apple highlighted in their big reveal of iOS 15 in June 2021 have been postponed and won’t in fact be available until 15.1 or later; these too will be flagged up.
Finally, this review reflects to a degree how I personally use my iPhone. I’m not a great user of Reminders or Notes so I won’t be able to do justice to any changes made for that or any other aspects of iOS which I myself neglect.

Mainstream mojo

Usually, my review of the new features for mobile journalists of the forthcoming version of iOS goes BIG on video, audio and photos – the mainstays of mojo.
But not this year, at least not quite to the same degree.
I’m not saying there’s nothing of interest to mobile journalists, or I wouldn’t have spent hours researching, writing, and putting this all together. But there’s certainly not as much as in its immediate predecessors, iOS 13 and 14.
From my perspective, there’s nothing new for videos, photos and audio creation using Apple’s in-built apps with a huge “wow” factor. The key word here is “creation”: iOS 15 doesn’t immediately permit anything radical in terms of how content is gathered. But there are clues of what third-party developers may be able to do to benefit users. 

Video bokeh

The first big change for video in iOS 15 could go some way to addressing one long-standing complaint about footage recorded on an iPhone – that too much is in focus, unlike the material from a ‘proper’ broadcast camera used in news, documentaries, wildlife programmes and so on.
Known as ‘bokeh’ or, more prosaically, ‘blurry background’, it’s the visual effect whereby the main subject of a video, such as an interviewee, is fully in focus while the background behind him or her is not.
It gives depth to shots and a blurred background means the viewer can concentrate on what is being said rather than wondering where the interview is being filmed. On an iPhone, all the footage tends to be in focus unless the subject of a shot is very close to the lens.
Due to the lack of a big image sensor needed to produce ‘natural’ bokeh, smartphones rely on software to artificially create and simulate the blurred background effect. Apple introduced this to photos with the iPhone 7 Plus of 2016 and its ‘Portrait Mode’, but it’s taken a full five years of advances to get it working on video – even if they were beaten to the punch by third-party apps like Focos Live.

A photo of me taken on the standard wide lens of the iPhone 11 Pro with no blur.
This photo from the iPhone 11 Pro in Portrait Mode shows the blurred background generated. 
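
For the technically curious: the raw ingredient behind any simulated bokeh is a per-pixel depth map, and third-party apps can already request one alongside the video frames on multi-lens iPhones. Below is a minimal Swift sketch using AVFoundation’s AVCaptureDepthDataOutput; it illustrates only how depth data is delivered to an app (capability checks and error handling omitted), not a recreation of Apple’s own Cinematic mode pipeline.

```swift
import AVFoundation

final class DepthCapture: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()

    func configure() throws {
        // Depth delivery needs a multi-lens camera system.
        guard let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                                   for: .video, position: .back) else { return }
        session.beginConfiguration()
        session.addInput(try AVCaptureDeviceInput(device: device))
        // Smooth over holes in the depth map before delivery.
        depthOutput.isFilteringEnabled = true
        session.addOutput(depthOutput)
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
        session.commitConfiguration()
        session.startRunning()
    }

    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime, connection: AVCaptureConnection) {
        // Each AVDepthData wraps a CVPixelBuffer of per-pixel distances;
        // a blur filter can use it to keep the subject sharp and soften the rest.
        let depthMap = depthData.depthDataMap
        _ = depthMap
    }
}
```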

Facetime Portrait Mode

If you’re lucky enough to be able to afford a model from the iPhone 13 series, then you’ll have bokeh for video, albeit at 30fps, which suits the requirements for footage shot in North America but is not what’s needed for TV in the UK and much of the rest of the world.
But if you have an iPhone XS or newer, then iOS 15 does offer a Portrait Mode option on video on Facetime, as well as a few select apps which already offer a blurred background feature such as Instagram, Snapchat and Zoom.
Open the app you want to use and then Control Centre; a new ‘effects’ tile is visible and once pressed, you can toggle ‘Portrait’ on or off.
Or you can do it straight from within Facetime itself:

The icon in the top left can turn the blurred background on and off.

Bokeh video beyond FaceTime?

This could all get really interesting if developers of professional video filming apps like FilmicPro or MoviePro are able to bring this functionality into their apps, giving bokeh to iPhones at the preferred 25fps or even 50fps.
But if it can only be done with the 13 series and not these older models, then journalists unable to acquire the very latest devices won’t be able to benefit from this innovation fully.
As for how it could benefit journalists, adding depth of field to footage would help close the gap further with the results from ‘big’ cameras. Purists though may still rail against the artificial, computer-generated aspect and the fact that it can be adjusted in post. Equally, early results I’ve seen have on occasion been less than impressive, with the blur failing altogether or being inconsistent, especially around the edges of clothing and hair, which is not a failing of “big” cameras.

Audio options in FaceTime

The audio for FaceTime calls also has new features which may too get incorporated into other apps in the coming weeks. Available via Control Centre again, users will see a new ‘Mic Mode’ tile which when pressed gives three choices: standard, voice isolation and wide spectrum.
The first should need little explanation; the second tries to suppress ambient noise as best it can, to focus better on the person speaking; the last does the opposite, incorporating environmental sounds and other people speaking in the background, in case you want the person you’re on a FaceTime call with to be able to hear everything that’s happening in your surroundings.
Is this useful for journalists? While it’s never a bad thing for a speaker to have more clarity, the tests I’ve done indicate it’s of limited benefit but that could have been because there was too little or too much ambient noise where I was at the time.
My results echoed those of a colleague who tested it on the other end of a FaceTime call. We could hear the other person better with voice isolation on, although it sounded noticeably processed, almost artificial, in quality. Wide spectrum did indeed boost the background noise.
If there are several people on the same call, then Spatial Audio kicks in (again, not if your device is an iPhone X or older) where the audio sounds like it’s coming from where each person is on the call. Again, this is another one where the clever work from independent developers, taking on the new features and pushing it further in their own apps, could be key.
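
Incidentally, iOS 15 does give third-party developers a small hook into these mic modes. As far as I can tell from the SDK, an app can’t set a mode itself; it can only ask the system to show the Control Centre picker and then read back what the user chose. A minimal Swift sketch, assuming a capture session is already running:

```swift
import AVFoundation

// Ask the system to present the Control Centre microphone mode picker.
// The user, not the app, makes the actual choice.
AVCaptureDevice.showSystemUserInterface(.microphoneModes)

// Read back the user's preference so the app's UI can reflect it.
switch AVCaptureDevice.preferredMicrophoneMode {
case .voiceIsolation: print("Suppressing ambient noise")
case .wideSpectrum:   print("Capturing the full sound field")
case .standard:       print("Standard processing")
@unknown default:     break
}
```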

Other FaceTime features

Before leaving FaceTime, a few other innovations it is getting in iOS 15 are worth mentioning even if they could be viewed less as ‘innovations’ and more ‘catching up with what’s been possible for a while on other cross-platform video calling apps like Zoom, Skype and Facebook Messenger’.
There’s a ‘mute alert’ for those enjoyable moments when someone speaks while their mic is muted. Also, users can now make FaceTime calls to PCs and Android devices, not just to those in the Apple ecosystem, with end-to-end encryption. You can also now invite anyone to a FaceTime call with a link.
One suggestion from Apple is to send a FaceTime link via WhatsApp, but I’m trying to get my head around why anyone would send a FaceTime web link via WhatsApp, encouraging someone to join a FaceTime call…when they could do a video or audio call on WhatsApp itself?
Finally, one big feature touted in Apple’s original ‘here’s what’s in iOS 15’ keynote event won’t be available from day one: SharePlay, where you can share a video you’re watching with someone else so you can enjoy it together over FaceTime.

Video playback options

When playing a video embedded on a website, three dots in the bottom right corner signify further options including the new ability to increase the speed at which the video plays, up to two times faster. It can also be slowed down to half-speed if you feel that’s absolutely necessary.

The options for adjusting video playback speed.
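
For what it’s worth, this kind of speed control has long been available to app developers through AVPlayer’s playback rate, which is presumably the mechanism Safari exposes here. A minimal Swift sketch with a placeholder URL:

```swift
import AVFoundation

// Placeholder URL for illustration only.
let player = AVPlayer(url: URL(string: "https://example.com/clip.mp4")!)

// Play at double speed; 0.5 would halve it.
player.playImmediately(atRate: 2.0)
```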

Video editing tweaks

For those editing their videos within iOS itself, rather than any 3rd party app or transferring the footage to a Mac, one welcome tweak makes this job a bit easier. Previously, editing a video caused it to shrink on the screen; now, tapping double-headed diagonal arrows will expand the video to full screen so you can see better what it looks like. You can even widen your fingers to expand the frame even more.

Videos were small when edited in iOS 14.
iOS 15 makes a video full screen for editing. 

EXIF data

There’s also more information available about videos as well as photos, as iOS 15 incorporates a feature long available via 3rd party apps – the EXIF data.
EXIF stands for Exchangeable Image File Format. Rather than needing to note down separately information about an image or video, such as camera exposure, date/time the image was captured, and even GPS location, it’s embedded in a special file alongside the media itself.
Before iOS 15, there was a time-consuming workaround to see the EXIF data, involving transferring an image to Files and then another dozen taps; numerous third party apps could do it too.
But it is all now directly visible within the Photos app (still known to many as ‘the Camera Roll’). Tapping on the (i) under the photo or video, or simply swiping up on it, will show which lens was used, the resolution, the size, ISO, shutter speed, frame rate and more.

How EXIF data is displayed for photos in iOS 15.
How EXIF data is displayed for videos in iOS 15.
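
The metadata Photos now displays has always been embedded in the file itself, which is how the third-party apps mentioned above get at it. For the curious, here is a minimal Swift sketch that reads the EXIF dictionary from an image using Apple’s ImageIO framework; the file path is a placeholder:

```swift
import Foundation
import ImageIO

// Placeholder path; in a real app this would come from the photo picker.
let url = URL(fileURLWithPath: "/path/to/photo.jpg") as CFURL

if let source = CGImageSourceCreateWithURL(url, nil),
   let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
   let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any] {
    // Typical entries: ISO, shutter speed, capture date and more.
    print(exif[kCGImagePropertyExifISOSpeedRatings] ?? "no ISO recorded")
    print(exif[kCGImagePropertyExifExposureTime] ?? "no shutter speed recorded")
}
```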

The benefits of EXIF data

In addition, it’ll show the name of the app if it was taken with a 3rd party app, and tapping that name will result in all the media captured with that app being shown. You can also access that material another way, by searching for the app’s name.
For journalists, knowing the file size of a video can be beneficial as the size gives an indication of how long it might take to upload, always bearing in mind there are numerous other factors in play here such as the speed of the connection.
It’s also worth pointing out that the file size of a video, along with other EXIF data, has already been available in some third-party apps’ media libraries, by loading a video and tapping the (i).
Whether journalists can use the EXIF feature to verify the date and time when material was captured will depend on the method used to share it. WhatsApp strips the date and time from material, with the result that iOS only shows the date and time of receipt; if it’s uploaded to Dropbox, then downloaded and saved to Photos, then the metadata is retained and visible.
Finally, while the inbuilt EXIF data shows a lot of information, it doesn’t show everything. For example, with a video, it omits the bitrate which can be useful to know as it gives an indication of how much data is in the video – the higher the bitrate, the better. Transferring the file to a Mac will reveal a lot more info besides.
Another change relating to photos and videos has come about quite possibly as a direct result of how EXIF data is accessed. On a Live Photo, swiping up used to show the options for adjustments such as looping it or bouncing it back and forth. Now that swiping up reveals the EXIF data, the Live Photo adjustments are instead accessed from a drop-down in the top left corner.

Live Text

Live Text is another change for the XS and newer; if you have an iPhone X or older, you can just read on in envy. Android users will also be reading on with a wry smile as the ‘new’ Live Text feature has long been available on many Android devices.
Go to ‘settings-camera’ and you’ll see a new ‘Live Text’ option. If you don’t want to use it, turn the green light off; but otherwise, turn it on and you’re good to go.
On compatible iPhones, the device can now ‘read’ text in photos, be that ones taken months or years ago and already in the Photos app or ones you’re about to take with the live camera. The text can be printed or handwritten too.
When you have the camera open, look on your screen to see if the live text icon appears.

If it doesn’t, you might need to move around until it does.
Once it’s visible, you’ll also get yellow brackets around the text that is now interactive. If the text you want to use isn’t within the brackets, move your phone around again until it is.
When ready, tap the Live Text icon and you’ll be able to select all or some of the text.

You’ll then be able to do things like copy it to paste into an email, or tap a phone number to call it, or start an email with the address in the ‘to’ field or even translate text into certain languages.
With photos already taken, the process can be even simpler depending on the text in question which the phone can “see”. If there’s a phone number or email address, simply tap it to use it; if that doesn’t work, a gentle tap elsewhere on the screen should bring up the Live Text icon which will definitely make the text in the photo interactive to use as suggested above.
It also works with handwriting, within reason.

The process of using Live Text to scan some terrible handwriting.
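
Live Text itself has no public API in iOS 15, but the underlying on-device text recognition has been open to developers through the Vision framework since iOS 13, which is roughly how apps offer similar scanning. A minimal Swift sketch, assuming an image called “document” exists in the app’s assets:

```swift
import UIKit
import Vision

// Assumed input image; in practice this would come from the camera or library.
guard let cgImage = UIImage(named: "document")?.cgImage else { fatalError() }

let request = VNRecognizeTextRequest { request, _ in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    // Keep the best candidate string for each detected text region.
    let lines = observations.compactMap { $0.topCandidates(1).first?.string }
    print(lines.joined(separator: "\n"))
}
request.recognitionLevel = .accurate  // slower, but copes better with handwriting

try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
```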

When Live Text can be useful

How might journalists use this? It’ll depend on the text in question, but the possibilities are huge. In addition to calling phone numbers or using email addresses as already suggested, you could tap an address to get directions to it. If there’s a time and date, tap it to add it to your calendar.
Or you might have been given a document and you need to use the text from it. Use the Live Text option to scan the words and you can instantly drop the text into an email rather than laboriously typing it out yourself – once you’ve checked it hasn’t missed out any words, such as ‘not’ from ‘my client will be pleading not guilty to all the charges’.

Visual Look Up

Staying with new tricks that can be done with photos, your device should soon be able to give you more information about what is actually in them through ‘Visual Look Up’.
A very similar feature has already been available on iPhones and Androids via the Google Lens app but it’s now being incorporated into iOS itself. Having said that, it only works on iPhones inside the USA and even there, not on an iPhone X or older. But when it’s released beyond the borders of the USA, users with compatible devices should look for a small star on the (i) under photos.
That indicates that the feature is active and you can use it to identify a plant, a landmark or animal.

Visual Look Up correctly identifying Bath Abbey.

Finding photos

Another useful addition for Photos, but one which isn’t limited to certain iPhones in certain locations like Visual Look Up, is the ability for your iPhone to find text in your images.
This is via the Spotlight search option, which is activated by a quick swipe down on any screen of your device. Once you’ve updated to iOS 15, your iPhone quietly scans all your photos for text they contain; Spotlight can now search for the text you ask it to find, and it’ll display the results.
I’ve found this quietly impressive, with Spotlight always returning the photo with the required text. So if you know you have a photo of a document with someone’s name in it, this is an efficient way of finding it rather than trawling all your photos.

Voice Memos

I’m going to omit some other photo and video changes as I don’t think they’re very journalistic (such as the Memories feature for content in your camera roll) so I’ll end this mojo-centric section with a sentence or two about Voice Memos, the iPhone’s audio recording app which now has a feature to skip silences on playback.
It does an OK job from what I’ve found, even if it’s a rather blunt and artificial way of shortening a recording. It also doesn’t work at all as an editing tool, because when you share audio with the silences skipped (for example by AirDrop or email), the recipient gets the audio with all the silences back in, as if you’ve done nothing to it. But if the length of audio is still too much for you, then iOS 15 lets you play it back at up to twice the speed.

Changes to Safari

Muscle memory can be important for many smartphone users: you know where certain apps are on your screen or you know exactly where the ‘reply’ button is on your favourite social media app. But hang on to your hats as iOS 15 brings in a big change to Safari, the main web browser on iPhones, meaning your muscles may have to relearn everything.
The URL address bar – where you enter a website’s address or a search term – has moved and now defaults to being at the bottom of the screen. For as long as anyone can remember, it’s been at the top.

In iOS 14, the URL bar was at the top but in iOS 15 it is at the bottom.

The thinking is that with so many of us having phones with larger screens, the address bar at the top was a strain to reach given our hands haven’t grown to match.
So, moving it lower down the screen makes it easier to access. My wife is admittedly something of a small sample size, but when I showed her Safari on iOS 15 (yes, the evenings just whizz by at our house) she immediately spotted the repositioned address bar and commented on how much easier it made it to use.
But even after using the beta of iOS 15 for several months, my fingers still twitch automatically towards the top of my screen.
All hope is not lost though if you want to return to things as they were, as there are two ways to do this: either tap the aA on the address bar itself and then ‘show top address bar’, or navigate your way through ‘settings-safari-single tab’.

How to move the bar back to the top. 

That these options even exist is a concession by Apple, as initial versions of Safari in iOS 15 had the bar at the bottom, like it or not. Such was the outcry that Apple relented enough to allow users to move the bar, even if the change wasn’t fully abandoned.
But the point remains that there’s nothing to tell users after they’ve upgraded that they can in fact return Safari to the top of the screen and I predict there’s going to be a lot of confusion over this change.
If you do like the new position, you’ll gain another feature that’s missing when the bar is at the top: the option to swipe left and right between websites.

Tab groups in Safari

One new feature within Safari which many journalists could find useful is called a “tab group”.
Let’s say you’re working on a court case and you have numerous pages open relating to it; but you’re also planning a dinner party for friends and you have several pages of recipes open; and you’re also thinking ahead to a holiday and you’ve lots of hotel websites open. Instead of all these pages being jumbled up together, you can create a tab group and put only one set of pages into that group, not the others.
When you want to access just the court case pages, tap to open that group and they’ll all be accessible. It’s a bit like bookmarking a website but more efficient as all the tabs open as soon as you swap to the group.

Safari Extensions

Mac users have had extensions for Safari for years. These powerful little add-ons extend (hence the name) what can happen in Safari, and they’re now available for iOS. Once you’ve given an extension permission to interact with websites, how you use them to benefit your journalism will depend on the ones you install. 

Introducing Focus

Whether you just want to get on with your work or want to prevent phone calls interfering while you’re filming something, Do Not Disturb has long been a failsafe.
But now there’s a super-charged DND, known as Focus. It has replaced the DND tile in Control Centre and also within Settings.
Additionally, there’s a lot more you can do with it although it’s worth pointing out here that you don’t HAVE to use these new features. If DND was enough for you, just turn it on as before.
But for the more adventurous, you can do a lot more now as you’re presented with four default Focuses (Foci?) which can each be configured to your liking – and you can also make your own entirely new Focusessses.

The default Focus options all users have access to.

Once you’ve activated a Focus on one device, it syncs across to all devices with the same Apple ID. You could set up the “personal” one so friends and family can still send you notifications, while work would only let selected colleagues do that.
The fact you’re in a Focus can be shared with others so when they message you, the sender should understand why you’re not replying and that you’re not actually ignoring them. That may not be enough to placate and buy off a stressed output editor on all occasions though.

How a Focus can tell someone you’re silencing their notifications, and how they see that information.

Breaking through a Focus

If they really insist they need to be able to contact you at all times, you can tweak things so that their (but only their) notifications are allowed through. The same can be done with apps, as ones you choose can still send their notifications.
If someone hasn’t been put on that whitelist, then they have the option to tap “notify anyway” which bursts through a Focus – but it feels like this should only be used sparingly as doing it too often or unnecessarily could easily cause annoyance or offence.

The ‘notify anyway’ option could prove handy.

Time Sensitive notifications

Things can even be taken a stage further, in a potentially confusing way. There’s an additional setting called “time sensitive” where any app not on the allowed list is still allowed to send notifications marked as ‘time sensitive’, such as an appointment in your calendar. But, as the image below shows, when the first one of these comes through, you are offered control over whether you actually want these or not.

‘Time sensitive’ notifications can still be shown, even when in a Focus.

Focus and journalists

Where it can get really useful for journalists and others is the fact that with a Focus turned on, entire pages of apps can be temporarily hidden.
This means that if your iPhone is organised enough to have all personal or non-work apps on one screen and work ones on another, you can set up a Focus so that all the tempting personal apps just simply aren’t available to you on your device, leaving you to…focus on the work-related task in hand with the apps you do need access to.
But all is not lost – if temptation is too much to resist, all your apps are still accessible via the App Library.
There’s also a way that a Focus could provide a level of security for journalists caught in a tricky situation – although it’ll need a bit of forward planning and I can’t promise it’ll be 100% certain to keep you safe.
The scenario would be that you’re reporting from a location where police officers might be keen to have a look at your device. You could have a Focus called something bland like ‘DayTime’ and set it up such that when active, the screen on your iPhone which has all your reporting and communication apps, as well as your email and Photos, isn’t visible and instead your device only shows less problematic ones.
When you see someone in a uniform and gun approaching, quickly activate DayTime and they’ll only initially see the innocuous apps. If someone with a bit more knowledge spends more time looking through your device, the truth may soon become apparent especially as all your apps remain a swipe away in the App Library, so please don’t seek retribution on me once you’re eventually released from a tiny prison cell.
A Focus can also be triggered automatically based on location, with your device suggesting what it thinks is the most appropriate. This one was flagged up to me when my iPhone detected I’d come home after being out:

A geo-located prompt about a Focus.

For power-user journalists, you can even trigger a Focus when you open an app.
I’ve set my iPhone up to do this. Combined with a personal automation, which triggers things like putting it into Airplane Mode and increasing the screen brightness to 100%, this means that I only need to open FilmicPro and I can use the app to gather content knowing I shouldn’t get any interruptions.
Other options for triggering a Focus are ‘at a certain time’, so if you have regular planning meetings each morning for an hour from 0800, the particular Focus will automatically come on at that time for that long; or ‘at a location’ so it’ll be triggered when you arrive at work before deactivating once you leave.
If all of this seems too much, then just carry on using Do Not Disturb as before.

Notifications

Related to Focus are Notifications, and these get tweaked too, with changes to how they look and also a new option of having them all delivered en masse at 8am and 6pm or other times to suit you. While I can see that some users may benefit from only seeing notifications at a particular time, it feels like news journalists in particular may need to see them a bit sooner than that.

Hide My Email

Each iOS release contains very technical bug fixes and security updates which take place in the background and over which you have no control. But others are more openly available to users and iOS 15 has its fair share of these. 
Giving out your email address to all and sundry, or using it when you sign up for apps or websites, may be something you’re totally fine with but it’d be understandable if many journalists would be less than comfortable doing this.
This is where ‘Hide My Email’ in iOS 15 could be useful, although it’s important to point out that it’s not available to all, only to users who pay for iCloud storage through a new service called iCloud+. If you’re still using the free, basic level of 5GB storage then you can’t use Hide My Email, but if you pay you’re automatically upgraded.
For those who are on iCloud+, you’ll get offered the option to use a randomly-generated email address which then links directly to your own one.

A randomly-generated email address via Hide My Email.

The app or retailer never gets to see who you really are, yet you receive their emails and are able to use their services. You can also actively create your own unique email address by going to ‘Settings-Apple ID-iCloud-Hide My Email’ on your iPhone.
Staying with emails, but something which applies to all users not just those with iCloud+ is ‘Mail Privacy Protection’.
When you first open the default email app after updating to iOS 15, you’ll see this screen:

The options for Mail Privacy Protection.

Mail Privacy Protection

The intention here is to give users some privacy when it comes to how companies and advertisers track you when you interact with their emails. Usually, tracking pixels and other identifiers report back when you open the email, with information about where you are, the time and your IP address.
With Mail Privacy Protection activated, your IP address is hidden and all content is loaded privately in the background, giving you an extra layer of privacy.

Private Relay

Back to a feature only available to those with iCloud+ but one which journalists may benefit from using called Private Relay. It’s available in iOS 15 even though it is still described by Apple as being in “beta”.
Private Relay is like a Virtual Private Network (a VPN), in that it obscures your IP address so you’re able to browse sites which might otherwise be inaccessible to you for example ones restricted by geography or content. You can choose to increase your anonymity by setting it to use the country and time zone you’re in, or have it maintain your general location so you can still see things which are local like restaurants and shops. 
Private Relay isn’t quite as powerful as a VPN though, so don’t plan on using it to watch Netflix US. Instead, it encrypts your browsing on sites without that little padlock on the URL bar, as well as hiding your real IP address.
This means that the site you’re looking at won’t know it’s you, and nor will Apple. It works like this: your traffic is sent to an Apple server and then the IP address, which can be used to locate you, is removed. Your request for a website is then sent to another server where it’s given a temporary IP address before going on to the website you’ve requested. This should mean websites can’t build up a record of your browsing history and therefore a profile of you more generally. There are more secure ways of doing all this, so if you really do need proper protection and anonymity, I wouldn’t rely on Private Relay. Being in beta, it isn’t totally reliable, which isn’t ideal given what it is trying to do. I’ve found that Private Relay doesn’t work at all on my home wifi and only functions on 4G.

Record App Activity

iOS 14 brought a change whereby a small green light would be visible when an app was using your camera and an orange one when your mic was being used. iOS 15 goes further, with a new tool which will show which apps and sites are accessing a much wider range of features and data.
It’s somewhat buried, but you can find it in ‘Settings-Privacy-Record App Activity’ and then it needs to be turned on.
After a week of logging, you’ll be able to access a summary of when your apps did what, as well as what those apps did with your data and the sites they subsequently contacted. This though is another feature which was showcased when iOS 15 was unveiled and which has yet to make it into the hands of users. 
Finally, a few changes which don’t fall easily into a particular category.

Longer Siri

If you’re the kind of journalist who likes to dictate your copy or script, then iOS 15 has removed the limit of how long Siri will listen to you before cutting out. It was capped at 60 seconds but now can keep going well beyond that. Tests I did showed Siri was fine at three minutes, although talking at speed did continue to be a challenge for it. It’s possible that this function could work as an automatic transcription service – open a note, turn on Siri and let it transcribe a speech or a press conference. I think it would be wise to also have a recording running at the same time, in case the transcription fails for some reason and also to give you the option to check it for accuracy.

Find My iPhone

Having a fully charged device is a pre-requisite for any journalist but if you’re the type who occasionally lets theirs run fully down and then mislays it, there’s renewed hope for you – as long as you have an iPhone 11 or newer (although not an SE 2020).
With iOS 15, you can still trace your device even when it’s out of power, as ‘out of power’ now means something slightly different: your device remains in a very low-power state. This means any nearby iOS device can see the Bluetooth signal it emits and send back its location to help you find it.

Notify When Left Behind

Some people are fortunate enough to have more than one iPhone and if they’re careless enough to forget to take one with them, a new alert will flag that up on their other devices. Called ‘Notify When Left Behind’, it’ll push a notification to the device you have remembered to take with you, as long as it’s on the same Apple ID and you’ve set the service up within the Find My app.

The new “Notify When Left Behind” alert (I am very forgetful).

If and when you get this alert, go to ‘Settings-Apple ID-Find My’ and then into ‘Find My iPhone’ and then ensure all three lights are on as this ensures the new feature is active. Remember though, you can’t do this after the fact so it might even be advisable to turn this setting on right now.

At-a-glance information

One frustration of iOS 14 for me was that when my device was in Do Not Disturb, iPhones with a notch like the X or 11 wouldn’t show the tell-tale crescent moon on the main screen. This meant I had no immediate visual confirmation of the status of my device. On devices without a notch, there was space for the moon.

A notch-less iPhone wouldn’t show the ‘Do Not Disturb’ crescent moon icon in iOS 14.

But in iOS 15, the icon is visible whichever Focus you’re in and I think that’s a useful improvement.

An iPhone in Work, Filming, Sleep and Do Not Disturb focus.

Bigger text where you want it

If you wanted larger text on previous versions of iOS, you either had to enable the feature for EVERYTHING on your device or not at all, whereas iOS 15 lets you do it per app.
Go to Control Centre in Settings and enable the “text size” option. Now, when you’re in an app where you need to adjust the size, slide to open the Control Centre panel and then press and hold on the aA icon.
In the bottom left it’ll give the name of the app currently open under Control Centre, as well as showing a slider to increase or decrease the font size.

Conclusion

These are the useful and interesting changes I’ve found from beta testing iOS 15 over the last few months. You might find others you like (or dislike) based on how you yourself use your device after you’ve upgraded. Or you may feel, having read this, that you’re happy with what iOS 14 can do and you’ll be fine only taking the bug fixes offered by Apple. For the first time ever, that choice is open to you.

#48 Is ProRes video recording coming to the next iPhone and is it a big deal? — 30. August 2021

#48 Is ProRes video recording coming to the next iPhone and is it a big deal?

ProRes logo and iPhone12 Pro Max image: Apple.

One of the things that always surprised me about Apple’s mobile operating system iOS (and now also iPadOS) was the fact that it wasn’t able to work with Apple’s very own professional video codec ProRes. ProRes is a high-quality video codec that gives a lot of flexibility for grading in post and is easy on the hardware while editing. Years ago I purchased the original Blackmagic Design Pocket Cinema Camera which can record in ProRes and I was really looking forward to having a very compact mobile video production combo with the BMPCC (that, unlike the later BMPCC 4K/6K was actually pocketable) and an iPad running LumaFusion for editing. But no, iOS/iPadOS didn’t support ProRes on a system level so LumaFusion couldn’t either. What a bummer.

Most of us will be familiar with video codecs like H.264 (AVC) and the more recent H.265 (HEVC) but while these have now become ubiquitous “all-in-one” codecs for capturing, editing and delivery of video content, this wasn’t always so. Initially, H.264 was primarily meant to be a delivery codec for a finished edit. It was not supposed to be the common editing codec – and for good reason: The high compression rate required powerful hardware to decode the footage when editing. I can still remember how the legacy Final Cut Pro on my old Mac was struggling with H.264 footage while having no problems with other, less compressed codecs. The huge advantage of H.264 as a capturing codec however is exactly the high compression because it means that you can record in high resolution and for a long time while still having relatively small file sizes which was and still is crucial for mobile devices where storage is precious. ProRes is basically the opposite: You get huge file sizes for the same recording but it’s less taxing on the editing hardware because it’s not as heavily compressed as H.264. From a quality standpoint, it’s capturing more and better color information and is therefore more robust and flexible when you apply grading in post production.
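
To make the distinction concrete: in AVFoundation, the capture codec is just one key in the output settings handed to an AVAssetWriter. A hedged Swift sketch is below; whether a device and OS actually accept the proRes422 codec type is a separate question, as discussed next, so treat this as an illustration rather than a working recipe.

```swift
import AVFoundation

// 4K output settings with HEVC, the space-saving default...
let hevcSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 3840,
    AVVideoHeightKey: 2160
]

// ...and the same frame size with ProRes 422. Only the codec key changes,
// but file size and grading flexibility change dramatically.
let proResSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.proRes422,
    AVVideoWidthKey: 3840,
    AVVideoHeightKey: 2160
]

// Sketch only: a real app would check canAdd(_:) and handle errors.
let writer = try! AVAssetWriter(outputURL: URL(fileURLWithPath: "/tmp/clip.mov"),
                                fileType: .mov)
writer.add(AVAssetWriterInput(mediaType: .video, outputSettings: proResSettings))
```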

Very recently, Mark Gurman published a Bloomberg article claiming (based on info from inside sources) that the next flagship iPhone will have the ability to capture video with the ProRes codec. This took me quite by surprise given the aforementioned fact that iOS/iPadOS doesn’t even “passively” support ProRes at this point. But if it turns out to be true, this is quite a big deal – at least for a certain tribe among the mobile video creator crowd, namely the mobile filmmakers.

I’m not sure so-called “MoJos” (mobile journalists) producing short, current news reports on smartphones would necessarily have to embrace ProRes as their new capture codec, since their workflow usually involves a fast turn-around without significant time spent on extensive color grading – the very thing ProRes is made for. The lighter compression of ProRes might also not be such a big deal for them since recent iPhones and iPads can easily handle 4K multi-track editing of H.264/H.265 encoded footage. On the other hand, the downside of ProRes – very big file sizes – might actually play a role for MoJos since iPhones don’t support SD cards as cheap, exchangeable external storage. Mobile filmmakers however might see this as a game-changer for their line of work, as they usually offload and back up their dailies externally before going back on set and also spend a significant amount of time on grading in post later on.

Sure, if you are currently shooting with an app like Filmic Pro and use their “Filmic Extreme” bitrate, ProRes bitrates might not even shock you that much, but the difference to standard mobile video bitrates is quite extreme nonetheless. To be more precise, the ProRes codec is not a single standard but comes in different flavors (with increasing bitrate): ProRes Proxy, ProRes LT, ProRes 422 (the “422” indicates its chroma subsampling), ProRes 422 HQ, ProRes 4444, ProRes 4444 XQ. ProRes 422 can probably be regarded as the “standard” ProRes. If we look at its target bitrates for 1080p FHD, it’s 122 Mbit/s for 25fps and 245 Mbit/s for 50fps. Moving on to UHD/4K, things get really enormous with 492 Mbit/s for 25fps and 983 Mbit/s for 50fps. A 1-minute clip of ProRes 422 UHD 25fps footage would be 3.69GB; a 1-minute clip of ProRes 422 UHD 50fps would be 7.37GB. It’s easy to see how limited internal storage can quickly become a problem here if you shoot lots of video. So I personally would definitely consider it a great option to have but not exactly a must for every job and situation. Of course I would expect ProRes also to be supported for editing within the system from then on. For more info on the ProRes codec and its bitrates, check here.
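If you want to sanity-check those figures, the arithmetic is simple – divide the bitrate by 8 to get megabytes per second, then multiply by the duration. A minimal sketch using the target bitrates quoted above:

```kotlin
// Quick sanity check of the ProRes 422 numbers above:
// file size (GB) = bitrate (Mbit/s) * duration (s) / 8 (bits per byte) / 1000 (MB per GB)
fun fileSizeGB(bitrateMbps: Double, seconds: Double): Double =
    bitrateMbps * seconds / 8 / 1000

fun main() {
    println(fileSizeGB(492.0, 60.0))   // UHD 25fps: ~3.69 GB per minute
    println(fileSizeGB(983.0, 60.0))   // UHD 50fps: ~7.37 GB per minute
    println(fileSizeGB(983.0, 3600.0)) // an hour of UHD 50fps: ~442 GB (!)
}
```

At those rates, even a 256GB iPhone fills up after roughly half an hour of UHD 50fps footage.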

At this point, however, the whole thing is NOT officially confirmed by Apple – it’s only (informed) speculation, and until recently I would have heavily doubted the probability of this actually happening. But the fact that Apple, totally out of the blue, introduced the option to record with a PAL frame rate in the native camera app earlier this year – something that by and large only video pros really care about – gives me confidence that Apple might actually pull this off for real, maybe in the hope of luring in well-known filmmakers who boost the iPhone’s reputation as a serious filmmaking tool. What do you guys think? Will it really happen and would it be a big deal for you?

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter featuring a personal selection of interesting things that happened in the world of mobile video in the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

#47 Videomakers please stop doing this! — 30. July 2021

#47 Videomakers please stop doing this!

Photo: SHVETS production / Pexels

Ok, today I have something a little different from the usual blog fare around here: a quick and dirty rant, maybe just a little bit tongue-in-cheek. I beg your pardon. I will only shame the deed, not name any perpetrators. You will probably have come across it and either noticed it consciously or subconsciously. Most likely on YouTube. There’s also a good chance you might disagree with what I am about to say. So be it. Now what am I talking about? 

The act of vlogging has risen to big stardom in the wake of moderately fast internet and affordable cameras. It often involves a person directly addressing the camera to tell us something – a casual piece-to-camera so to speak. Addressing the camera basically means addressing us as an audience. Maybe they’re talking about a political topic, about lip-gloss, their ongoing travels to exotic places – or why you should/shouldn’t buy this new exciting smartphone that just came out etc. etc. This is all fine. Here’s looking at you kid, I can take it all day long if need be – well if you have something interesting to say anyway …

What really annoys me though is the fact that an increasing number of creators (oh, that’s a fancy word these days!) feel the absolute need to cross-cut their into-the-camera shot with one from the side where they quite obviously do not look directly into the lens but way off. I can’t help it, I always find this extremely irritating and it makes me lose my focus on what’s being said. To me it feels like someone is talking to me, telling me something, looking me in the eye – and then at some point he or she just starts looking somewhere else while still talking to me. As if they saw someone they know walking by and followed them with their eyes while continuing to talk to you. Don’t get me wrong, this technique can be used to great effect the other way round in movies, when a character breaks the so-called “fourth wall” and directly looks into the camera at some point. Marc Vernet has written an interesting article about it called “The Look at the Camera”. Some of the most memorable cases that come to mind for me personally would probably include “A Clockwork Orange”, “The Silence of the Lambs” and “American Beauty”. And no, your name doesn’t have to be Stanley Kubrick, Jonathan Demme or Sam Mendes to be allowed to do that.

But switching between having someone look directly into the camera and then past it in the subsequent shot should have purposeful artistic value; it shouldn’t just be done out of an imagined need to have a different shot from a different angle because the main one is perceived as too boring to work all the way through. Something that does sort of work for me, if the videomaker absolutely wants a little bit of change in the visual composition, is to use the main camera shot but crop in – very easy if you shoot in 4K but deliver in 1080p. By doing this, you still keep the continuity of looking directly into the camera. It’s also possible to cut in b-roll where the talking person is not seen at all. And it’s a different story if there are other persons involved. But if it’s a single person, going back and forth between shots that have the presenter look into the camera and then not, without good reason, is a nuisance – at least for me, at least at this point in time.

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter featuring a personal selection of interesting things that happened in the world of mobile video in the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

#46 Top tips for smartphone videography in the summer — 28. June 2021

#46 Top tips for smartphone videography in the summer

Photo: Julia Volk via Pexels.com

It’s the dog days of summer again – well at least if you live in the northern hemisphere or near the equator. While many people will be happy to finally escape the long lockdown winter and are looking forward to meeting friends and family outside, intense sunlight and heat can also put extra stress on the body – and it makes for some obvious and less obvious challenges when doing videography. Here are some tips/ideas to tackle those challenges.

Icon: Alexandr Razdolyanskiy via The Noun Project

Find a good time/spot!
Generally, some of the problems mentioned later on can be avoided by picking the right spot and/or time for an outdoor shoot during the summertime. Maybe don’t set up your shot in the middle of a big open field where you and your phone are totally exposed to the full load of sunshine photons at high noon. Rather, try to shoot in the morning, late afternoon or early evening and also think about picking a spot in the shadows. Or choose a time when it’s slightly overcast. Of course it’s not always possible to freely choose time and spot, sometimes you just have to work in difficult conditions.

„Bum to the sun“ – yes or no?
There’s a saying that you should turn your „bum to the sun“ when shooting video. This definitely holds some truth as pointing the lens directly towards the sun can cause multiple problems, including unwanted lens flare, underexposed faces or a blown-out background. You can however also create artistically interesting shots that way (silhouettes for instance), and the „bum to the sun“ motto comes with problems of its own: if you are shooting away from the sun but the person you are filming is looking directly towards it, they could be blinded by the intense sunlight and squint their eyes, which doesn’t look very flattering. If the sun is low, you also might have your own shadow in the shot. So I think the saying is something to take into consideration but shouldn’t be adhered to exclusively and in every situation.

Check the sky!
Clouds can severely impact the amount of sunlight that reaches the ground. So if you have set up an interview or longer shot and locked the exposure at a given time when there isn’t a single cloud in front of the sun, there might be a nearby one crawling along already that will take away lots of light later on and give you an underexposed image at some point. Or vice versa. So either do your thing when there are no (fast moving) clouds in the vicinity of the sun or when the cloud cover will be fairly constant for the next minutes.

Use an ND filter!
As I pointed out in my last blog post The Smartphone Camera Exposure Paradox, a bright sunny day can create exposure problems with a smartphone if you want to work with the „recommended“ (double the frame rate, for instance 1/50s at 25fps) or an acceptable shutter speed because phones only have a fixed, wide-open aperture. Even with the lowest ISO setting, you will still have to use a (very) fast shutter speed that can make motion appear jerky. That’s why it’s good to have a neutral density (ND) filter in your kit which reduces the amount of light that hits the sensor. There are two different kinds of ND filters: fixed and variable. The latter one lets you adjust the strength of the filtering effect. Unlike with dedicated regular cameras, the lenses on smartphones don’t have a filter thread so you either have to use some sort of case or rig with a filter thread or a clip-on ND filter.
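To figure out which filter strength you need, it helps to think in stops: each stop of ND halves the light, and the ND factor (ND2, ND4, ND8…) is just 2 to the power of the stops. A small sketch with hypothetical metering numbers:

```kotlin
import kotlin.math.log2
import kotlin.math.pow
import kotlin.math.roundToInt

// Hypothetical scenario: the phone meters a correct exposure at 1/3200s,
// but at 25fps you'd like to stay near the "recommended" 1/50s.
fun ndStopsNeeded(meteredShutter: Double, targetShutter: Double): Double =
    log2(targetShutter / meteredShutter)

fun main() {
    val stops = ndStopsNeeded(1.0 / 3200, 1.0 / 50)              // 6.0 stops
    println("You need roughly an ND${2.0.pow(stops).roundToInt()}") // ND64
}
```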

Shoot LOG! (Well, maybe…)
Some 3rd party video recording apps and even a few native camera apps allow you to shoot with a LOG picture profile. A log profile distributes exposure and color information along a logarithmic rather than a linear curve, compared to a „normal“ non-log picture profile. By doing this you basically gain a bit more dynamic range (the range spanning between the brightest and darkest areas of an image), which can be very useful in high-contrast scenarios like a sunny day with extreme highlights and shadows. It also gives you more flexibility for grading in post to achieve the look you want. This however comes with some extra work as pure log footage can look rather dull/flat and usually needs grading to look „pretty“ as a final result. It is possible though to apply so-called LUTs (simply put: a pre-defined set of grading parameters) to log footage to reduce or avoid time spent on manual grading.
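If the “logarithmic rather than linear” part sounds abstract, here’s a toy illustration – my own made-up curve, not any vendor’s actual log profile – showing how a log mapping lifts shadows while compressing highlights:

```kotlin
import kotlin.math.ln

// Toy curves only – real log profiles (V-Log, Log-C etc.) use vendor-specific math.
// Both functions map scene brightness 0..1 to a recorded code value 0..1.
fun linear(x: Double): Double = x
fun toyLog(x: Double): Double = ln(1 + 99 * x) / ln(100.0)

fun main() {
    for (x in listOf(0.01, 0.1, 0.5, 1.0)) {
        // The log curve lifts shadows (0.01 -> ~0.15) and compresses highlights.
        println("scene %.2f  linear %.2f  log %.2f".format(x, linear(x), toyLog(x)))
    }
}
```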

Get a white case!


Ever heard of the term „albedo“? It designates the amount of sunlight (or, if you want to be more precise, solar radiation) that is reflected by objects. Black objects reflect less and absorb more solar radiation (smaller albedo) than white objects (higher albedo). You can easily get a feeling for the difference by wearing a black or a white shirt on a sunny day. Similarly, if you expose a black or dark-colored phone to intense sunlight, it will absorb more heat than a white or light-colored phone and therefore be more prone to overheating. So if you do have a black or dark-colored phone, it might be a good idea to get yourself a white case so more sunlight is reflected off of the device. Vice versa, if you have a white or light-colored phone with a black case, take it off. Be aware though that a white case only reduces the absorption of „external heat“ from solar radiation, not the internal heat generated by the phone itself, something that particularly happens when you shoot in 4K/UHD, high frame rates or high bit rates. You should also take into consideration that a case that fits super tight might reduce the phone’s ability to dissipate internal heat. Ergo: a white phone (case) only offers some protection against the impact of direct solar radiation, not against internal heat produced by the phone itself or high ambient temperatures.
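A rough back-of-the-envelope with assumed but typical values (full sun at around 1000 W/m², albedo of about 0.05 for matte black and 0.8 for white) shows how big the difference is:

```kotlin
// Whatever isn't reflected is absorbed as heat: absorbed = (1 - albedo) * irradiance.
// The albedo and irradiance figures are assumptions, not measurements of any phone.
fun absorbedWattsPerSqm(albedo: Double, irradiance: Double = 1000.0): Double =
    (1 - albedo) * irradiance

fun main() {
    println(absorbedWattsPerSqm(0.05)) // black phone: ~950 W/m² absorbed
    println(absorbedWattsPerSqm(0.80)) // white case:  ~200 W/m² absorbed
}
```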

Maximize screen brightness!
This is pretty obvious. Of course bright conditions make it harder to see the screen and judge framing, exposure and focus so it’s good to crank up the screen brightness. Some camera apps let you switch on a feature that automatically maximizes screen brightness when using the app.

Get a power bank!
Maximizing screen brightness will significantly increase battery consumption though, so you should think about having a back-up power bank at hand – at least if you are going on a longer shoot. But most of us already have one or two so this might not even be an additional purchase.

Use exposure/focus assistants of your camera app!
Analytical assistant tools in certain camera apps can be very helpful in bright conditions when it’s hard to see the screen. While very few native camera apps offer some limited assistance in this respect, it’s an area where dedicated 3rd party apps like Filmic Pro, mcpro24fps, ProTake, MoviePro, Mavis etc. can really shine (pardon the pun). For setting the correct exposure you can use Zebra (displays stripes on overexposed areas of the frame) or False Color (renders the image into solid colors identifying areas of under- and overexposure – usually blue for underexposure and red for overexposure). For setting the correct focus you can use Peaking (displays a colored outline on things in focus) and Magnification (digitally magnifies the image). Not all of the mentioned apps offer all of these tools. And there’s also a downside: using these tools puts extra stress on your phone’s chipset, which also means more internal heat – so only use them when setting exposure and focus for the shot, and turn them off once you are done.
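These tools are conceptually simple – a zebra overlay, for instance, just flags every pixel whose luma sits above a threshold. A minimal sketch (the 235 default is an assumption on my part, roughly “broadcast white” in 8-bit video; apps typically let you adjust it):

```kotlin
// Flag overexposed pixels the way a zebra overlay does: luma at or above a threshold.
fun zebraMask(luma: IntArray, threshold: Int = 235): BooleanArray =
    BooleanArray(luma.size) { i -> luma[i] >= threshold }

fun main() {
    val pixelRow = intArrayOf(12, 128, 200, 240, 255)
    println(zebraMask(pixelRow).joinToString()) // false, false, false, true, true
}
```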

Photo: Moondog Labs

Use a sun hood!
Another way to better see the screen in sunny weather is to use a sun hood. There are multiple generic smartphone sun hoods available online but also one from dedicated mobile camera gear company MoondogLabs. Watch out: SmallRig, a somewhat renowned accessory provider for independent videography and filmmaking, has a sun hood for smartphones in its portfolio, but it’s made for using smartphones as a secondary device with regular cameras or drones, so there’s no cut-out for the lens or open back – which renders it useless if you want to shoot with your phone. This cautionary advice also applies to some other smartphone sun hoods, so check before you buy.

Photo: RollCallGames

Sweaty fingers?
An issue I encountered last summer while doing a bike tour, where I would occasionally stop to take some shots of interesting scenery along the road, was that sweaty hands/fingers can cause problems with a phone’s touch screen: touches either aren’t registered at all or register in the wrong places. This can be quite annoying. Turns out there’s such a thing as „anti-sweat finger sleeves“, which were apparently invented for passionate mobile gamers. So I guess kudos to PUBG and Fortnite aficionados? There’s also another option: you can use a stylus or pen to navigate the touch screen. Users of the Samsung Galaxy Note series are clearly at an advantage here as the stylus comes with the phone.

Photo: George Becker via Pexels.com

Don’t forget the water bottle!
Am I going to tell you to cool your phone with a refreshing shower of bottled drinking water? Despite the fact that many phones nowadays offer some level of water-resistance, the answer is no. I’m including this tip for two reasons: First, it’s always good to stay hydrated if you’re out in the sun – I have had numerous situations where I packed my gear bag with all kinds of stuff (most of which I didn’t need in the end) but forgot to include a bottle of water (which I desperately needed at some point). Secondly, you can use a water bottle as an emergency tripod in combination with a rubber band or hair tie as shown in workshops by Marc Settle and Bernhard Lill. So yes, don’t forget to bring a water bottle!

Got other tips for smartphone videography in the summertime? Let us know!

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter featuring a personal selection of interesting things that happened in the world of mobile video in the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

Welcome to smartfilming.blog! — 21. May 2021

Welcome to smartfilming.blog!

If you want to learn about how smartphones and other compact mobile cameras can be powerful and fascinating tools for videography, you have come to the right place! I’m covering a variety of aspects of this topic including mobile devices/cameras, operating systems, apps, accessories and the art of mobile videography, particularly what I like to call “phoneography”. This knowledge can be very useful for a whole range of professional and/or hobby videography enthusiasts and visual storytellers: mobile journalists, smart(phone) filmmakers, vloggers, YouTubers, social media content creators, business or NGO marketing experts, teachers, educators or hey, even if you’re “just” doing home movies for your family! Your phone is a mighty media production powerhouse – learn how to unleash and wield it, right here on smartfilming.blog!

Feel free to connect with me on other platforms (click on the icons):

For a complete list of all my blog articles, click here.

To get in touch via eMail, click here.

To donate to this cost & ad-free blog via PayPal, click here.

#45 The Smartphone Camera Exposure Paradox — 11. May 2021

#45 The Smartphone Camera Exposure Paradox

Ask anyone about the weaknesses of smartphone cameras and you will surely find that people often point towards a phone’s low-light capabilities as its Achilles heel – or at least one of them. When you are outside during the day it’s relatively easy to shoot some good-looking footage with your mobile device, even with budget phones. Once it’s darker or you’re indoors, things get more difficult. The reason for this is essentially that the image sensors in smartphones are still pretty small compared to those in DSLMs/DSLRs or professional video/cinema cameras. Bigger sensors can collect more photons (light) and produce better low-light images. A so-called “Full Frame” sensor in a DSLM like Sony’s Alpha 7 series has a surface area of 864 mm², a common 1/2.5” smartphone image sensor only 25 mm². So why not just put a huge sensor in a smartphone? While cameras in smartphones have undeniably become a very important factor, the phone is still very much a multi-purpose device and not a single-purpose one like a dedicated camera – for better or worse. That means there are many things to consider when building a phone. I doubt anyone would want a phone with a form factor that doesn’t allow you to put it in your pocket. And the flat form factor makes it difficult to build proper optics with larger sensors. Larger sensors also consume more power and produce more heat, not exactly something desirable. If we are talking about smartphone photography from a tripod, some of the missing sensor size can be compensated for with long exposure times. The advancements in computational imaging and AI have also led to dedicated and often quite impressive photography “Night Modes” on smartphones. But very long shutter speeds aren’t really an option for video as any movement appears extremely blurred – and while today’s chipsets can already handle supportive AI processing for photography, the more resource-intensive demands of videography are yet a bridge too far. So despite the latest developments signaling that we’re about to experience a considerable bump in smartphone image sensor sizes (Sony and Samsung are about to release a 1-inch/almost-1-inch image sensor for phones), one could say that most/all smartphone cameras (still) have a problem with low-light conditions. But you know what? They also have a problem with the exact opposite – very bright conditions!

If you know a little bit about how cameras work and how to set the exposure manually, you have probably come across something called the “exposure triangle”. The exposure triangle contains the three basic parameters that let you set and adjust the exposure of a photo or video on a regular camera: shutter speed, aperture and ISO. In more general terms you could also say: time, size and sensitivity. Shutter speed signifies the amount of time that the still image or a single frame of video is exposed to light, for instance 1/50 of a second. The longer the shutter speed, the more light hits the sensor and the brighter the image will be. Aperture refers to the size of the iris’ opening through which the light passes before it hits the sensor (or, way back when, the film strip); it’s commonly measured in f-stops, for instance f/2.0. The bigger the aperture (= the SMALLER the f-stop number), the more light reaches the sensor and the brighter the image will be. ISO (or “Gain” on some dedicated video cameras) finally refers to the sensitivity of the image sensor, for instance ISO 400. The higher the ISO, the brighter the image will be. Most of the time you want to keep the ISO as low as possible because higher sensitivity introduces more image noise.

So what exactly is the problem with smartphone cameras in this respect? Well, unlike dedicated cameras, smartphones don’t have a variable aperture, it’s fixed and can’t be adjusted. Ok, there actually have been a few phones with variable aperture, most notably Samsung had one on the S4 Zoom (2013) and K Zoom (2014) and they introduced a dual aperture approach with the S9/Note9 (2018), held on to it for the S10/Note 10 (2019) but dropped it again for the S20/Note20 (2020). But as you can see from the very limited selection, this has been more of an experiment. The fixed aperture means that the exposure triangle for smartphone cameras only has two adjustable parameters: Shutter speed and ISO. Why is this problematic? When there’s movement in a video (either because something moves within the frame or the camera itself moves), we as an audience have become accustomed to a certain degree of motion blur which is related to the used shutter speed. The rule of thumb applied here says: Double the frame rate. So if you are shooting at 24fps, use a shutter speed of 1/48s, if you are shooting at 25fps, use a shutter speed of 1/50s, 1/60s for 30fps etc. This suggestion is not set in stone and in my humble opinion you can deviate from it to a certain degree without it becoming too obvious for casual, non-pixel-peeping viewers – but if the shutter speed is very slow, everything begins to look like a drug-induced stream of consciousness experience and if it’s very fast, things appear jerky and shutter speed becomes stutter speed. So with the aperture being fixed and the shutter speed set at a “recommended” value, you’re left with ISO as an adjustable exposure parameter. Reducing the sensitivity of the sensor is usually only technically possible down to an ISO between 50 and 100 which will still give you a (heavily) overexposed image on a sunny day outside. So here’s our “paradox”: Too much available light can be just as much of an issue as too little when shooting with a smartphone.
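To put rough numbers on that paradox (assuming the classic “sunny 16” rule, which pegs bright sunlight at around EV 15 for ISO 100, and a typical fixed f/1.8 smartphone aperture – both assumptions, your phone may differ), here’s a minimal sketch:

```kotlin
import kotlin.math.log2

// EV = log2(N² / t), with N = f-stop and t = shutter speed in seconds.
fun ev(fStop: Double, shutter: Double): Double = log2(fStop * fStop / shutter)

fun main() {
    val sceneEv = 15.0               // bright sunny day at ISO 100 (assumed)
    val cameraEv = ev(1.8, 1.0 / 50) // fixed f/1.8 aperture, 1/50s for 25fps
    println(sceneEv - cameraEv)      // ≈ 7.7 stops overexposed at ISO 100
    // Dropping to ISO 50 buys one stop; an ND64 (6 stops) covers most of the rest.
}
```

In other words: even at base ISO you’d still be six to seven stops too bright – which is exactly the gap an ND filter has to close.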

What can we do about the two problems? Until significantly bigger smartphone image sensors or computational image enhancement for video arrive, the best way to tackle the low-light challenge is to provide your own additional lighting or look for more available light, be it natural or artificial. Depending on your situation, this might be relatively easy or downright impossible. If you are trying to capture an unlit building at night, you will most likely not have a sufficient number of ultra-bright floodlights at hand. If you are interviewing someone in a dimly lit room, a small LED might just provide enough light to keep the ISO at a level without too much image noise.

Clip-on variable ND filter

As for the too-much-light problem (which ironically gets even worse with bigger sensors setting out to remedy the low-light problems): try to pick a less sun-drenched spot, shoot with a faster shutter speed if there is no or little action in the shot or – and this might be the most flexible solution – get yourself an ND (neutral density) filter that reduces the amount of light that passes through the lens. While some regular cameras have inbuilt ND filters, this feature has yet to appear in a smartphone, although OnePlus showcased a prototype phone last year that had something close to a proper ND filter, using a technology called “electrochromic glass” to hide the lens while still letting (less) light pass through (check out this XDA Developers article). So until this actually makes it to the market and proves to be effective, the filter has to be an external one that is either clipped on or screwed on if you use a dedicated case with a corresponding filter thread. You also have the choice between a variable and a non-variable (fixed density) ND filter. A variable ND filter lets you adjust the strength of its filtering effect, which is great for flexibility, but it also has some disadvantages like the possibility of cross-polarization. If you want to learn more about ND filters, I highly recommend checking out this superb in-depth article by Richard Lackey.

So what’s the bigger issue for you personally? Low-light or high-light? 

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter featuring a personal selection of interesting things that happened in the world of mobile video in the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

#44 Split channels (dual mono) audio from the Rode Wireless Go II in LumaFusion — 4. May 2021

#44 Split channels (dual mono) audio from the Rode Wireless Go II in LumaFusion

Rode just recently released the Wireless GO II, a very compact wireless audio system I wrote about in my last article. One of its cool features is that you can feed two transmitters into one receiver, so you don’t need two audio inputs on your camera or smartphone to work with two external mic sources simultaneously. What’s even cooler is that you can record the two mics into separate channels of a video file (split-track dual mono audio), so you are able to access and mix them individually later on – very helpful if you need to make volume adjustments or eliminate unwanted noise from one mic that would otherwise just be “baked in” with a merged track. There’s also the option to record a -12dB safety track into the second channel when you are using the GO II’s “merged mode” instead of the “split mode” – this can be a lifesaver when the audio of the original track clips because of loud input.

If you use a regular camera like a DSLM, it’s basically a given that you can record in split track dual mono and it also isn’t rocket science to access the two individual channels on a lot of desktop editing software. If you are using the GO II with a smartphone and even want to finish the edit on mobile afterwards, it’s a bit more complicated.

First off, if you want to make use of split channels or the safety channel, you need to be able to record a video file with dual track audio, because only then do you have two channels at your disposal, two channels that are either used for mic 1 and mic 2 or mic 1+2 combined and the safety channel in the case of the Wireless Go II. Most smartphones and camera apps nowadays do support this though (if they support external mics in general). The next hurdle is that you need to use the digital input port of your phone, USB-C on an Android device or the Lightning port on an iPhone/iPad. If you use the 3.5mm headphone jack (or an adapter like the 3.5mm to Lightning with iOS devices), the input will either create single channel mono audio or send the same pre-mixed signal to both stereo channels. So you will need a USB-C to USB-C cable for Android devices (Rode is selling the SC-16 but I also made it work with another cable) and a USB-C to Lightning cable for iOS devices (here the Rode SC-15 seems to be the only compatible option) to connect the RX unit of the GO II to the mobile device. Unfortunately, such cables are not included with the GO II but have to be purchased separately. A quick note: Depending on what app you are using, you either need to explicitly choose an external mic as the audio input in the app’s settings or it just automatically detects the external mic.

Once you have recorded a dual mono video file including separate channels and want to access them individually for adjustments, you also need the right editing software that allows you to do that. On desktop, it’s relatively easy with the common prosumer or pro video editing software (I personally use Final Cut Pro) but on mobile devices there’s currently only a single option: LumaFusion, so far only available for iPhone/iPad. I briefly thought that KineMaster (which is available for both Android and iOS) can do it as well because it has a panning feature for audio but it’s not implemented in a way that it can actually do what we need it to do in this scenario.

So how do you access the different channels in LumaFusion? It’s actually quite simple: You either double-tap your video clip in the timeline or tap the pen icon in the bottom toolbar while having the clip selected. Select the “Audio” tab (speaker icon) and find the “Configuration” option on the right. In the “Channels” section select either “Fill From Left” or “Fill From Right” to switch between the channels. If you need to use both channels at the same time and adjust/balance the mix you will have to detach the audio from the video clip (either triple-tap the clip or tap on the rectangular icon with an audio waveform), then duplicate the audio (rectangular icon with a +) and then set the channel configuration of one to “Fill From Left” and for the other to “Fill From Right”.

Here’s hoping that more video editing apps implement the ability to access individual audio tracks of a video file and that LumaFusion eventually makes it to Android.

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter featuring a personal selection of interesting things that happened in the world of mobile video in the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

#43 The Rode Wireless Go II review – Essential audio gear for everyone? — 20. April 2021

#43 The Rode Wireless Go II review – Essential audio gear for everyone?

Australian microphone maker RØDE is an interesting company. For a long time, the main thing they had going for them was that they would provide an almost-as-good but relatively low-cost alternative to high-end brands like Sennheiser or AKG and their established microphones, thereby “democratizing” decent audio gear for the masses. Over the last years however, Rode grew from “mimicking” products of other companies to a highly innovative force, creating original products which others now mimicked in return. Rode was first to come out with a dedicated quality smartphone lavalier microphone (smartLav+) for instance and in 2019, the Wireless GO established another new microphone category: the ultra-compact wireless system with an inbuilt mic on the TX unit. It worked right out of the box with DSLMs/DSLRs, via a TRS-to-TRRS or USB-C cable with smartphones and via a 3.5mm-to-XLR adapter with pro camcorders. The Wireless GO became an instant runaway success and there’s much to love about it – seemingly small details like the clamp that doubles as a cold shoe mount are plain ingenuity. The Interview GO accessory even turns it into a super light-weight handheld reporter mic and you are also able to use it like a more traditional wireless system with a lavalier mic that plugs into the 3.5mm jack of the transmitter. But it wasn’t perfect (how could it be as a first generation product?). The flimsy attachable wind-screen became sort of a running joke among GO users (I had my fair share of trouble with it) and many envied the ability of the similar Saramonic Blink 500 series (B2, B4, B6) to have two transmitters go into a single receiver – albeit without the ability for split channels. Personally, I also had occasional problems with interference when using it with an XLR adapter on bigger cameras and a Zoom H5 audio recorder.

Now Rode has launched a successor, the Wireless GO II. Is it the perfect compact wireless system this time around?

The most obvious new thing about the GO II is that the kit comes with two TX units instead of just one – already know where we are headed with this? Let’s talk about it in a second. A first look at the Wireless GO II’s RX and TX units doesn’t really reveal anything new – apart from the fact that they are labelled “Wireless GO II”, the form factor of the little black square boxes is exactly the same. That’s both good and maybe partly bad I guess. Good because yes, just like the original Wireless GO, it’s a very compact system; “partly bad” because I suppose some would have loved to see the TX unit be even smaller for using it standalone as a clip-on with the internal mic and not with an additional lavalier. But I suppose the fact that you have a mic and a transmitter in a single piece requires a certain size to function at this point in time. The internal mic also pretty much seems to be the same, which isn’t a bad thing per se, it’s quite good! I wasn’t able to make out a noticeable difference in my tests so far but maybe the improvements are too subtle for me to notice – I’m not an audio guy. Oh wait, there is one new thing on the outside: a new twist-mechanism for the wind-screen – and this approach actually works really well and keeps the wind-screen in place, even if you pull on it. For those of us who use it outdoors, this is a big relief.

But let’s talk about the new stuff “under the hood”, and let me tell you, there’s plenty! First of all, as hinted at before, you can now feed two transmitters into one receiver. This is perfect if you need to mic up two persons for an interview. With the original Wireless GO you had to use two receivers and an adapter cable to make it work with a single audio input.

It’s even better that you can choose between a “merged mode” and a “split mode”. The “merged mode” combines both TX sources into a single pre-mixed audio stream, “split mode” sends the two inputs into separate channels (left and right on a stereo mix, so basically dual mono). The “split mode” is very useful because it allows you to access and adjust both channels individually afterwards – this can come in handy for instance if you have a two-person interview and one person coughs while the other one is talking. If the two sources are pre-mixed (“merged mode”) into the same channel, then you will not be able to eliminate the cough without affecting the voice of the person talking – so it’s basically impossible. When you have the two sources in separate channels you can just mute the noisy channel for that moment in post. You can switch between the two modes by pressing both the dB button and the pairing button on the RX unit at the same time. 

One thing you should be aware of when using the split-channels mode recording into a smartphone: this only works with the digital input port of the phone (USB-C on Android, Lightning on iPhone/iPad). If you use a TRS-to-TRRS cable and feed it into the 3.5mm headphone jack (or a 3.5mm adapter, like the one for the iPhone), the signal gets merged, as there is just one contact left on the pin for mic input – only allowing mono. If you want to use the GO II’s split channels feature with an iPhone, there’s currently only one reliable solution: Rode’s SC15 USB-C to Lightning cable, which unfortunately is a separate purchase (around 25 Euros). With Android it’s less restrictive. You can purchase the equivalent SC16 USB-C to USB-C cable from Rode (around 15 Euros) but I tested it with a more generic USB-C to USB-C cable (included with my Samsung T5 SSD drive) and it worked just fine. So if you happen to have a USB-C to USB-C cable around, try this first before buying something new. You should also consider that you need a video editing software that lets you access both channels separately if you want to individually adjust them. On desktop, there are lots of options but on mobile devices, the only option is currently LumaFusion (I’m planning a dedicated blog post about this).

If you don’t need the extra functionality of the “split mode” or the safety channel and are happy to use it with your device’s 3.5mm port (or a corresponding adapter), be aware that you will still need a TRS-to-TRRS adapter (cable) like Rode’s own SC4 or SC7 because the included one from Rode is TRS-to-TRS which works fine for regular cameras (DSLMs/DSLRs) but not with smartphones which have a TRRS headphone jack – well, if they still have one at all, that is. It may all look the same at first sight but the devil is in the detail, or in this case the connectors of the pin.

If you want to use the GO II with a camera or audio recorder that has XLR inputs, you will need a 3.5mm to XLR adapter like Rode’s own VXLR+ or VXLR Pro.

Along with the GO II, Rode released a desktop application called Rode Central which is available for free for Windows and macOS. It lets you activate and fine-tune additional features on the GO II when it’s connected to the computer. You can also access files from the onboard recording, a new feature I will talk about in a bit. A mobile app for Android and iOS is not yet available but apparently Rode is already working on it.

One brilliant new software feature is the ability to record a simultaneous -12dB safety track when in “merged mode”. It’s something Rode already implemented on the VideoMic NTG and it’s a lifesaver when you don’t know in advance how loud the sound source will be. If there’s a very loud moment in the main track and the audio clips, you can just use the safety track which at -12dB probably will not have clipped. The safety channel is however only available when recording in “merged mode” since it uses the second channel for the back-up. If you are using “split mode”, both channels are already filled and there’s no space for the safety track. It also means that if you are using the GO II with a smartphone, you will only be able to access the safety channel feature when using the digital input (USB-C or Lightning), not the 3.5mm headphone jack analogue input, because only then will you have two channels to record into at your disposal.
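To get a feel for why -12dB is enough headroom in practice: decibels map to amplitude as 10^(dB/20), so the safety track runs at roughly a quarter of the main track’s level. A quick sketch with a made-up peak:

```kotlin
import kotlin.math.pow

// Decibels map to amplitude as 10^(dB/20), so -12dB is about a quarter of the level.
fun dbToAmplitude(db: Double): Double = 10.0.pow(db / 20)

fun main() {
    val ratio = dbToAmplitude(-12.0) // ≈ 0.25
    // A hypothetical peak at 160% of full scale clips on the main track
    // (hard-capped at 1.0) but lands at ~0.4 on the safety track.
    println(ratio)
    println(1.6 * ratio)
}
```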

Another lifesaver is the new onboard recording capability which basically turns the two TX units into tiny standalone field recorders, thanks to their internal mic and internal storage. The internal storage is capable of recording up to 7 hours of uncompressed wav audio (the 7 hours also correspond with the battery life which probably isn’t a coincidence). This is very helpful when you run into a situation where the wireless connection is disturbed and the audio stream is either affected by interference noise or even drop-outs.

There are some further options you can adjust in the Rode Central app: you can now activate a more nuanced gain control pad for the output of the RX unit. On the original GO, you only had three different settings (low, medium, high); now you have a total of 11 (in 3dB steps from -30dB to 0dB). You can also activate a reduced sensitivity for the input of the TX units when you know that you are going to record something very loud. Furthermore, you can enable a power saver mode that will dim the LEDs to preserve some additional battery life.

Other improvements over the original GO include a wider transmission range (200m line-of-sight vs. 70m) and better shielding from RF interference.

One thing that some people were hoping for in an updated version of the Wireless GO is the option to monitor the audio that goes into the receiver via a headphone output – sorry to say that didn’t happen but as long as you are using a camera or smartphone/smartphone app that gives you live audio monitoring, this shouldn’t be too big of a deal.

Aside from the wireless system itself, the GO II comes with a TRS-to-TRS 3.5mm cable to connect it to regular cameras with a 3.5mm input, three USB-C to USB-A cables (for charging and connecting it to a desktop computer/laptop), three windshields, and a pouch. The pouch isn’t that great in my opinion – I would have preferred a more robust case but I guess it’s better than nothing at all. And as mentioned before: I would have loved to see a TRS-to-TRRS, USB-C to USB-C and/or USB-C to Lightning cable included to ensure out-of-the-box compatibility with smartphones. Unlike some competitors, the kit doesn’t come with separate lavalier mics, so if you don’t want to use the internal mics of the transmitters you will have to make an additional purchase unless you already have some. Rode offers the dedicated Lavalier GO for around 60 Euros. The price for the Wireless GO II is around 300 Euros.

So is the Rode Wireless GO II perfect? Not quite, but it’s pretty darn close. It surely builds upon an already amazingly compact and versatile wireless audio system and adds some incredible new features so I can only recommend it for every mobile videomaker’s gear bag. If you want to compare it against a viable alternative, you could take a look at the Saramonic Blink 500 Pro B2 which is roughly the same price and comes with two lavalier microphones or the Hollyland Lark 150.

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter featuring a personal selection of interesting things that happened in the world of mobile video in the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂

#42 Camera2 API Update 2021 – Android Pro Videography & Filmmaking — 15. April 2021

#42 Camera2 API Update 2021 – Android Pro Videography & Filmmaking

I’ve already written about Camera2 API in two previous blog posts (#6 & #10) but a couple of years have passed since and I felt like taking another look at the topic now that we’re in 2021. 

Just in case you don’t have a clue what I’m talking about here: Camera2 API is a software component of Google’s mobile operating system Android (which basically runs on every smartphone today except Apple’s iPhones) that enables 3rd party camera apps (camera apps other than the one that’s already on your phone) to access more advanced functionality/controls of the camera, for instance the setting of a precise shutter speed value for correct exposure. Android phone makers need to implement Camera2 API into their version of Android and not all do it fully. There are four different implementation levels: “Legacy”, “Limited”, “Full” and “Level 3”. “Legacy” basically means Camera2 API hasn’t been implemented at all and the phone uses the old, way more primitive Android Camera API; “Limited” signifies that some components of the Camera2 API have been implemented but not all; “Full” and “Level 3” indicate complete implementation in terms of video-related functionality – “Level 3” only has the additional benefit for photography that you can shoot in RAW format. Android 3rd party camera apps like Filmic Pro, Protake, mcpro24fps, ProShot, Footej Camera 2 or Open Camera can only unleash their full potential if the phone has adequate Camera2 API support; Filmic Pro doesn’t even let you install the app in the first place if the phone doesn’t have proper implementation. “Adequate”/“proper” can already mean “Limited” for certain phones, but you can only be sure with “Full” and “Level 3” devices. With some other apps like Open Camera, Camera2 API is deactivated by default and you need to go into the settings to enable it to access things like shutter speed and ISO control.

How do you know what Camera2 API support level a phone has? If you already own the phone, you can use an app like Camera2 Probe to check, but if you want to consider this before buying a new phone, that’s of course not possible. Luckily, the developer of Camera2 Probe has set up a crowd-sourced list (users can submit test results via the app, which are automatically entered into the list) with Camera2 API support levels for a massive number of different Android devices, currently over 3500! The list can be accessed here and it’s great that you can even sort it by different parameters like phone brand or type a device name into a search bar.

It’s important to understand that there’s a Camera2 API support level for each camera on the phone, so the rear camera’s level can differ from the selfie camera’s. The support level also doesn’t say anything about how many of the phone’s cameras have been made accessible to 3rd party apps. Auxiliary ultra wide-angle or telephoto lenses have become a common standard in many of today’s phones but not all phone makers allow 3rd party camera apps to access the auxiliary camera(s). So when we talk about the Camera2 API support level of a device, most of the time we are referring to its main rear camera.
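For the technically inclined: this per-camera level is exactly what an app like Camera2 Probe queries through the Camera2 API itself. A minimal sketch using the standard Android calls – note that only the cameras a phone maker actually exposes will show up here:

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Print the Camera2 hardware support level of every camera the system exposes.
fun logCamera2Levels(context: Context) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    for (id in manager.cameraIdList) {
        val level = manager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
        val name = when (level) {
            CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY -> "Legacy"
            CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED -> "Limited"
            CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL -> "Full"
            CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_3 -> "Level 3"
            else -> "Other/Unknown"
        }
        println("Camera $id: $name")
    }
}
```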

Camera2 API was introduced with Android version 5 aka “Lollipop” in 2014 and it took phone makers a bit of time to implement it into their devices so one could roughly say that only Android devices running at least Android 6 Marshmallow are actually in the position to have proper support. In the beginning, most phone makers only provided full Camera2 API support for their high-end flagship phones but over the last years, the feature has trickled down to the mid-range segment and now even to a considerable amount of entry-level devices (Nokia and Motorola are two companies that have been good with this if you’re on a tight budget).

I actually took the time to go through the Camera2 Probe list to provide some numbers on this development. Of course these are not 100% representative since not every single Android device on the planet has been included in the list but I think 3533 entries (as of 21 March 2021) make for a solid sample size.

Phone models running Android 6 – Level 3: 0, Full: 30, Limited: 18, Legacy: 444 → Full/Level 3: 6.1%

Phone models running Android 7 – Level 3: 82, Full: 121, Limited: 113, Legacy: 559 → Full/Level 3: 23.2%

Phone models running Android 8 – Level 3: 147, Full: 131, Limited: 160, Legacy: 350 → Full/Level 3: 35.3%

Phone models running Android 9 – Level 3: 145, Full: 163, Limited: 139, Legacy: 69 → Full/Level 3: 59.7%

Phone models running Android 10 – Level 3: 319, Full: 199, Limited: 169, Legacy: 50 → Full/Level 3: 70.3%

Phone models running Android 11 – Level 3: 72, Full: 28, Limited: 8, Legacy: 2 → Full/Level 3: 90.9%
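If you want to retrace how the Full/Level 3 percentages are derived, they’re simply the share of “Full” plus “Level 3” models among all listed models for that Android version – here’s the math for the Android 11 row:

```kotlin
// Share of Full + Level 3 models among all listed models for an Android version.
fun fullLevel3Share(level3: Int, full: Int, limited: Int, legacy: Int): Double =
    100.0 * (level3 + full) / (level3 + full + limited + legacy)

fun main() {
    println("%.1f".format(fullLevel3Share(72, 28, 8, 2))) // Android 11: 90.9
}
```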

I think it’s pretty obvious that the implementation of proper Camera2 API support in Android devices has been taking massive steps forward with each iteration of the OS and a 100% coverage on new devices is just within reach – maybe the upcoming Android 12 can already accomplish this mission?

As always, if you have questions or comments, drop them here or hit me up on the Twitter @smartfilming. If you like this article, also consider subscribing to my free Telegram channel (t.me/smartfilming) to get notified about new blog posts and receive the monthly Ten Telegram Takeaways newsletter featuring a personal selection of interesting things that happened in the world of mobile video in the last four weeks.

For an overview of all my blog posts click here.

I am investing a lot of time and work in this blog and I’m even paying to keep it ad-free for an undistracted reading experience. If you find any of the content useful, please consider making a small donation via PayPal (click on the PayPal button below). It’s very much appreciated. Thank you! 🙂