Security mistakes mobile devs make

Developers are making critical security mistakes when it comes to making mobile apps, and this isn't going to change unless someone is held liable for the mistakes.
Written by Michael Lee, Contributor

(android vs macbookair image by LAI Ryanne, CC BY 2.0)

Speaking at the Open Web Application Security Project's AppSec Asia Pacific 2012 conference last week, Jacob West, director of software security research for HP's enterprise security products division, said that developers are still coming to grips with developing for mobile platforms and, in doing so, are making a number of critical security mistakes.

West said that new features in Google and Apple's mobile platforms, such as a read-only stack, built-in data-encryption facilities, and advanced permission and privilege models governing how applications communicate and interact with each other, can be confusing for developers. Worse still, he said that developers are falling into the trap of not changing the way they develop, even though the application communication models in iOS and Android are completely different.

Intents and purposes

While applications built for iOS are completely self-contained, Android applications, when designed correctly, can make use of components from other applications and other parts of the phone.

"This is one of the big distinctions between iOS and Google Android today; the idea that iOS applications effectively don't communicate with each other, but, in the Android world, not only are applications permitted to communicate with each other, but encouraged to do so. Basically, development best practices say that many applications are going to be built as multiple components that share 'intents' or actions between one another in order to implement a multi-tier application effectively."

These "intents" could be the act of taking a photograph, or displaying certain information within an app or to another. One Android developer could create secure application A to take photos, while another developer creates application B to securely upload information. Although application B could include the capability to take photos, best practice dictates using application A through an "intent". It also allows permissions to be divided between the two applications, and to allow a user, who may not necessarily trust application A to connect to the internet, to use application B for that purpose.

"It seems perfectly reasonable, if you're a developer, to think, 'I want to take a picture; I should ask for the camera permission — the permission that restricts access to the hardware device,' and, in fact, that would be something you could do as you're coding this application, and it would work, and the user would probably give you that permission, but it's not the way you're meant to do things," West said.

"In the Android model, you shouldn't explicitly talk to the camera; you should, rather, talk to the camera application that's provided to you, and effectively send a message [an intent] saying 'I would like to take this picture'. [The camera app] will take the picture for you, and send data back to you with the actual image. That's the way you're supposed to do things."

Intents cause problems, however, when they aren't explicitly limited to a particular application and can be reached for the wrong reasons: they can be spoofed to deliver false information, or hijacked to let a rogue app report data back to its owners. One example West provided involved the manipulation of intents in a public transit information application.

"Imagine you can inject malicious bus-timing information or bus stop location information to a user. Now ... not only can you screw up their day, because you caused them to wait at some bus stop where no bus is arriving, but you can also effectively control their physical location. You can make sure they're in a certain place in a certain time, which could lead to a real physical security problem, as well."

New platform, old mistakes

While developers could be forgiven for making these mistakes, given that the development environment is still relatively new, West said that it is still common to see mistakes from the desktop world being needlessly repeated, or good security practices from the desktop not being carried through.

"In the web application, the Facebook team gives users the option to send all traffic over HTTPS, so they obviously think in some cases, this is an important security feature to make available to their users. In the mobile application, there's no such option made available to users, so by choosing to use the mobile application, you are basically signing up to give up your privacy. You're signing up to send this data over an insecure channel," he said.

While irate at the social networking company over its apparent double standards, West is almost dumbfounded to still be discovering SQL injection vulnerabilities in other mobile applications.

"It's amazing to me that after, what, 25, 30 years, we're still making exactly the same mistake.

"I don't know what the reason for the repetition of these [SQL injection] mistakes is, but I do know that as we increase our use of the SQLite database, and increasingly we store important business or user data on [mobile devices], we need to worry about how that data is accessible to attackers."

In fact, West argued, if best practice were better followed, the SQLite databases built into Android would be used more often to protect information that should be limited to a single application.

One such example would be the Amazon Kindle application, which stores ebooks in a folder on mobile devices' local storage.

"It stores all the books there, [and] we don't think about the books as being a super-sensitive sort of data, but other applications can modify them, so if there's a code-level vulnerability in the Kindle app, another application tweaking those files that Kindle's going to load could lead to some problem."

"But also, when the application is removed, all of the data that is written to that local storage persists beyond the removal of the application. So, if you were reading about Java programming, [it's] probably no big deal. But if you were reading about how to find your next great job, and it was on a device that someone at work had access to ... it could be kind of a problem for you from a privacy standpoint."

West's suggestion of keeping such information in SQLite databases takes advantage of database privileges to restrict access to particular applications, but he also lamented that, as a whole, application privileges aren't working as they should.
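
As a sketch of what that looks like in practice, an app-private SQLite database created through Android's SQLiteOpenHelper lives under the application's own data directory, is not readable or writable by other applications, and is removed when the app is uninstalled. The database and table names below are illustrative only.

```java
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

// Keeps per-user content in an app-private database rather than in files
// dropped on shared external storage, where any app can read or modify them.
public class LibraryDbHelper extends SQLiteOpenHelper {

    public LibraryDbHelper(Context context) {
        // Created under the app's private data directory
        // (e.g. /data/data/<package>/databases/library.db) and deleted
        // automatically when the app is uninstalled.
        super(context, "library.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, content BLOB)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS books");
        onCreate(db);
    }
}
```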

"The choice about whether we grant an application these permissions or these privileges is binary," West said, highlighting the fact that users are effectively left with an all-or-nothing approach to security.

"Users just want the application that they're trying to install. They just want to play Angry Birds. They're not really concerned with this long list of cryptic permissions the application is requesting, and I think this leaves the door open for inadvertently dangerous applications, and definitely for explicitly malicious applications to be installed to take advantage over this ... trust that users are placing in them."

No incentives for security

Although developers could test whether each permission is really needed by removing the request and seeing whether their application breaks, West said that there is zero incentive to do so.

West told ZDNet Australia that, combined with the pressure to release a product on multiple mobile platforms, often through boutique firms that have little to no working knowledge of the back-end systems the applications connect to over the internet, businesses simply aren't thinking about whether the application model for iOS is relevant to Android.

With users accepting, or simply desensitised to, application permission requests, West said that developers often see no repercussions for neglecting security, and have few, if any, incentives to write secure code.

However, West is optimistic that things will change, because Google and Apple, among other players, have something to lose from poor security.

"Let's say a breach occurs, and someone's information is stolen. Who are [users] going to hold accountable for that loss of information or that damage done to their identity? It might be the app developer, but I think it's just as likely to be the service provider or the people that they downloaded the app from," West said.

"Someone's going to be held accountable for security problems, [but] until users begin to [point the finger], then it's not going to be important to the providers because it's not costing them anything."
