[Openmcl-devel] Apple To Require Sandboxing For Mac App Store Apps - Slashdot
Gary Byers
gb at clozure.com
Mon Nov 7 13:19:25 PST 2011
My understanding is that users downloading a sandboxed app from the App Store
get to see the app's entitlements before doing so.
If one of the goals of sandboxing (and the App Store) is to make it easier
and safer for non-technical users (e.g., your grandmother) to download and
install applications, it's not clear how that fits. (I have mental images
of a kindly old lady thinking "well, I'm not really sure why this program
that's supposed to help me manage my cookie recipes needs to talk to an IRC
server in Minsk, but ..." before downloading.)
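For concreteness, what the user is implicitly being asked to evaluate is
something like the app's entitlements file. This is a sketch, not any real
app's manifest; the two keys shown (the sandbox opt-in itself and outbound
network access) are genuine App Sandbox entitlement keys, but a real app
would list whichever entitlements it actually requests:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt the application into the App Sandbox -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Allow outbound network connections (e.g., to that server in Minsk) -->
    <key>com.apple.security.network.client</key>
    <true/>
</dict>
</plist>
```

Note that nothing in this format says *why* the network access is wanted,
which is exactly the kindly-old-lady problem above.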
Android uses a similar scheme. In Android, the "entitlements"
(whatever Android calls them) are so generic that it's very difficult
to evaluate them: you're told that "Passive-Aggressive Birds" needs to
access the internet, but you're not told why. (So it can check for
updates? Send location and demographic information to Google or
Facebook? Check the server in Minsk to see if there's any new malware?)
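For comparison, on the Android side that disclosure boils down to a one-line
declaration in the app's manifest (the package name here is hypothetical;
`android.permission.INTERNET` is the real permission):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.passiveaggressivebirds">
    <!-- The only network-related disclosure the user sees: internet access,
         with no indication of what it will be used for -->
    <uses-permission android:name="android.permission.INTERNET" />
    <application android:label="Passive-Aggressive Birds" />
</manifest>
```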
If I saw something that said "this application is remarkably casual
about checking data received over the internet and therefore likely
vulnerable to lots of remote exploits", I probably wouldn't download
it. I don't expect to see that, and don't know how meaningful detailed
info like that would be to the App Store's supposed target audience.
I don't know enough about how MacOS sandboxing is implemented to have
a real sense of what its strengths or weaknesses are. If a sandboxed
application ran in a chrooted environment (and that was enforced by
the OS) and if it ran with reduced privileges, that'd be a little different
than if it ran "as if" in a chrooted environment and as if it had reduced
privileges. (As others have pointed out, there's a difference between what
a program's code does and what an exploit's payload running in a particular
process context does.)
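To make the distinction concrete, OS-enforced confinement means the kernel
has irrevocably restricted the process, as in this minimal POSIX sketch.
This is not how Apple's sandbox actually works (its enforcement is
entitlement-based rather than chroot-based), and the jail path and the
uid/gid value 65534 ("nobody" on many systems) are assumptions:

```python
import os

def drop_into_jail(jail_dir, uid, gid):
    """OS-enforced confinement: chroot into jail_dir, then shed root.
    Order matters: chroot() requires root, so privileges are dropped
    last, and setgid() must precede setuid() (afterwards the process
    no longer has the right to change its group)."""
    os.chroot(jail_dir)   # filesystem root is now jail_dir, kernel-enforced
    os.chdir("/")         # don't keep a working directory outside the jail
    os.setgid(gid)        # drop group privileges first
    os.setuid(uid)        # irrevocably become the unprivileged user

if __name__ == "__main__":
    try:
        drop_into_jail("/var/empty", 65534, 65534)  # paths/ids are assumptions
        print("confined; escaping now requires a kernel bug, not an app bug")
    except PermissionError:
        print("chroot needs root -- which is the point: the OS enforces it")
```

A process merely running "as if" confined -- say, filtering its own file
paths in user space -- offers no such guarantee: an exploit payload running
in that process context can simply skip the filtering.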
When a program's submitted to the App Store right now
(pre-sandboxing), it's probably fairly difficult and time-consuming
(and expensive, from Apple's point of view) to audit its intended
behavior. If its intended behavior has to comply with a set of rules
and is advertised by some mechanism (entitlements), that auditing process
becomes simpler (and cheaper.) I can understand how that could have value
to Apple, even if sandboxing does little to defend the user against malware
(I don't really know whether it does or not) and even though there are a lot
of applications that I'd be interested in using or writing that don't really
fit into the proposed sandboxing model and may not fit into the final model.
On Mon, 7 Nov 2011, Tim Bradshaw wrote:
> On 7 Nov 2011, at 15:53, Tom Emerson <tremerson at gmail.com> wrote:
>
>> I'm a bit confused by the question: the whole point of the sandbox is to minimize the detrimental impact of a rogue third-party application on a user's computer. Presumably there is an implicit trust between Apple and its users (i.e., I trust that Apple-authored software is not going to install a virus or otherwise attempt to steal information) that does not exist with third parties.
>>
>
> That's one purpose. Another, and probably more common, purpose is to handle the case where a well-meaning but not bug-free application gets handed something toxic which causes it, in turn, to do something bad. That, of course, is a very common problem indeed, and probably what is driving sandboxing. I may trust Apple to be well-meaning: I certainly don't trust their code to be bug-free, any more than I trust anyone's.
>
> As I said before: what they need is a rating / classification system which will let you understand what privileges an application will be given.
> _______________________________________________
> Openmcl-devel mailing list
> Openmcl-devel at clozure.com
> http://clozure.com/mailman/listinfo/openmcl-devel
>
>