64-bit iPhone

Since everyone seems to have an opinion about the iPhone 5s having a 64-bit processor, here's mine.

In the immediate future, this will make zero difference for consumers. Applications that ship today will continue to be compiled as 32-bit, so that they can run on all the other iOS devices as well. Since the iPhone 5s is likely to be faster than all those other iOS devices anyway, there's little point right now in optimizing an application for the iPhone 5s: everything that runs well enough on an iPhone 4s or iPhone 5 or iPhone 5c will run well enough on an iPhone 5s as well. Optimize for the low end, and the high end will take care of itself.

In the medium term, there will come a moment when all supported iOS devices have 64-bit support, with Apple dropping support for 32-bit devices. At that point, the iPhone 5s will be the oldest supported iOS device, and the iPhone 5c won't be supported any more. Indirectly, that means that the iPhone 5s will have a longer useful life than the iPhone 5c, and since the difference between 32-bit and 64-bit is qualitative and not quantitative, it's possible that the useful life of an iPhone 5s will be more than one year longer than that of the iPhone 5c. Users who care about how long they can keep their phones before being forced to replace them because of obsolescence could consider the more expensive iPhone 5s over an iPhone 5c on this point alone, regardless of the other differences.

Finally, 64-bit support in the iPhone 5s is a great stepping stone for developers. Even though deploying 64-bit-only applications might not be practical for another few years, it's never too early to make sure that code compiles for 64-bit targets and passes all the unit tests, to run microbenchmarks, and maybe even to do some dogfooding. In the long term, I can't rule out that Apple would remove 32-bit support entirely and would therefore mandate 64-bit applications, and it's probably easier to get ready on an ongoing basis than to go through a 64-bit fire drill many years down the road.
 
"Optimize for the low end, and the high end will take care if itself."

You could have stopped right there, and this would have been a useful, informative post. Well said!
Rich S
 
I thought 64-bit meant absolutely nothing except that eventually iPhones will be able to have 4GB+ memory. Since this iPhone likely has 2GB or less, it's more of a sign of the future than something useful at the moment. But they waved their "first 64-bit phone" flag, so there's that.
 
Spot on. Basically they are starting the 64-bit transition now, and in ~3 years' time they will have all (supported) phones on 64-bit. Every other vendor will have a mix of low-end 32-bit and high-end 64-bit devices at that time.
 
+Rich Stone my desktop PC ran happily with 12GB of RAM and a 32 bit OS for years, so >4GB RAM is not a stopping issue
Rich S
 
You may have had 12GB in it, and it ran happily, but there was ~8GB that was never accessed by the OS.  
Rob M.
 
I'm less than familiar with the technical side of such things, but it's clearly creating headroom for the future. I think the majority of the disappointment on the intarwebz is coming from people expecting immediate gratification.
 
+Rich Stone No individual program accessed more than 4GB at once, but many people run multiple programs at once.
 
Very well said, JBQ - I totally agree. It is also clear that they had to move iOS to 64-bit if they want it to stay relevant in the long term. People keep associating iOS with the iPhone, when in fact the iPhone is among the least powerful hardware that runs iOS. The tablet and set-top-box market will need 64-bit much sooner than the phone market.
 
+Rich Stone well 64bit is a little ambiguous. 64bit could refer to the registers, databus or address bus. For registers/databus this means faster processing of big data. For address bus this means access to >4GB memory.
 
This whole post is going to make some people cry. But it  is brilliant.
 
+Rich Stone if you are running 32-bit applications on 64-bit hardware, you won't see much difference, and you're absolutely correct about being able to process more than 4GB. But applications written/optimized for the newer architecture, and its processing capability, are just a natural shift in mobile computing's progression, much like consumer workstations now.
 
Indeed, you don't need a 64-bit processor to use 4GB of RAM or more. Intel's Pentium Pro (which was clearly a 32-bit processor) was able to access 64GB of RAM. ARM's Cortex-A12 core (which is also a 32-bit core) can access 1TB of RAM.

64-bit pointers are necessary to access more than 2GB at a time from within a single application, and full 64-bit support is necessary when individual data structures are larger than 2GB.
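To make that limit concrete, here's a minimal C sketch (my illustration, not from the original post): on an ILP32 build, pointers are 32 bits wide, so even a 3GB request exceeds what the process's address space can realistically provide, while the same request is routine on an LP64 build.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* On a 32-bit (ILP32) build, pointers are 32 bits, capping a
           single process at a 4GB address space (much less in practice
           once the kernel, libraries, and code take their share). */
        printf("pointer size: %zu bits\n", sizeof(void *) * 8);

        /* 3GB still fits in a 32-bit size_t, but the allocation will
           almost certainly fail on a 32-bit build because the address
           space isn't there; on a 64-bit build it's an ordinary request. */
        size_t three_gb = (size_t)3 * 1024 * 1024 * 1024;
        void *p = malloc(three_gb);
        printf("3GB allocation %s\n", p ? "succeeded" : "failed");
        free(p);
        return 0;
    }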
 
Interesting read!

In order to solve the compatibility issue, Apple could allow developers to compile both for 32bit and 64bit and upload both files to the app store. 
The users would then automatically get the correct version according to their device.  
 
At this point 64-bit is just marketing. Let's face it, it hasn't brought much more to the table.
 
+Christian Göllner - Developers currently have no incentive to do that. iPhone 5s will run 32-bit applications just fine, and will run them better than any other iOS device anyway. Shipping 64-bit code would require a lot more testing for very little benefit.

Once a point is reached when building a 64-bit version has some tangible benefits over being 32-bit only, it's up to each developer to evaluate the cost/benefits of staying 32-bit-only, of going for both architectures, or of moving straight to 64-bit-only, but at the moment I am assuming that shipping 32-bit-only is the way to go.
 
Apple could be automatically compiling both 32 and 64 bit and benefiting from at least compiler optimizations to make 64-bit apps marginally faster. As long as they can sort out 100% compatibility (all apps compiled for 64-bit work), I don't see a downside.

If your hardware supports 64-bit, you'll get the 64-bit version, otherwise 32-bit.
 
32bit ought to be enough for anybody. :-D 
 
+Artem Russakovskii - I don't expect that this would be transparent. When compiling natively (C/C++/ObjC), it's far too easy to accidentally make assumptions about the underlying architecture. There's no substitute for (unit) testing, (integration) testing, (user) testing.

History is full of similar transitions that ended up not being transparent. On the PC side, protected mode, 32-bit, MMX, SSE, 64-bit. On the Mac side, PPC, x86, 64-bit. Android on x86 and MIPS. GNU/Linux on Alpha or ARM. Changes in instruction sets are never transparent, especially when they come with changes in the data model.

The way Apple still supports 32-bit apps on the iPhone 5s instead of mandating 64-bit apps is the only real-world way of sorting out "100% compatibility".
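As a concrete illustration of those accidental assumptions (a sketch of my own, not JBQ's code), here are three classic 32-bit-isms next to their portable forms:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    void examples(const char *s, void *p) {
        /* Broken: int addr = (int)p;  -- truncates a 64-bit pointer.
           Portable: uintptr_t is defined to round-trip a pointer. */
        uintptr_t addr = (uintptr_t)p;

        /* Broken: int len = strlen(s);  -- strlen returns size_t,
           which silently narrows on LP64. Portable: keep the size_t. */
        size_t len = strlen(s);

        /* Broken: printf("%d", len);  -- wrong specifier once size_t
           is 64-bit. Portable: %zu, and PRIxPTR for uintptr_t. */
        printf("addr=0x%" PRIxPTR " len=%zu\n", addr, len);
    }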
 
Thank you for the observation. I have nothing against more powerful hardware becoming more commonplace in the market, but how will the addition of 64-bit architecture impact the cost of this next generation of flagship devices? More to the point, is the cost-to-performance ratio worth it for a price-conscious early adopter? Is the cost in production for 64-bit vs. 32-bit even significant? I realize this idea is relative to the consumer, but aside from a device having a longer life in terms of performance, would you estimate that early adopters will fully reap the rewards of the hardware? Having "too much power" in terms of hardware is always hard to argue against. :)
I wonder if software will be able to keep up with more and more powerful hardware as production cost relative to hardware capability becomes less and less relevant. The future is bright for the manufacturers, developers, and the consumers.
 
+Drew Bannister - I actually expect the hardware cost difference to be negligible. ALUs and register banks don't use much silicon space compared to caches or GPU cores, and silicon space is only a small part of the cost of a phone.

On its own, all else being equal, 64-bit rarely makes things faster, and it's likely to actually make most tasks slower for having to move more data around: there are very few applications where handling data types larger than 32-bit is a bottleneck. Differences can come from fundamental changes in the instruction set (e.g. x86 from 16-bit to 32-bit), but I don't think that's the case between 32-bit and 64-bit ARM since the 32-bit ARM instruction sets are actually solid.

The key, mostly, is that 64-bit will allow applications to handle large data sets more efficiently, once large enough memory sizes become common, and once it makes sense to architect applications to take advantage of those capabilities. We're not there yet in terms of user-visible results, but the iPhone 5s is a step forward that will allow application developers to think ahead with a concrete example.
 
+Jean-Baptiste Quéru I assumed that the increase in expense would have to be minimal for the mass production of the device to be viable. Also, I currently have little experience in memory management as I've been spoiled by the JVM. Looking forward to getting my hands dirty with lower-level instructions soon. Thanks for the input. Hope all is well.
 
So basically Apple is preparing for the future! What is Google (Android) doing to prepare for this?
 
Do you think Android developers will have some advantage over iOS devs in terms of "porting" apps to 64-bit, thanks to the Dalvik VM "abstraction", or not? I expect simple/medium Android apps to see no impact on the code side.
 
I believe there are more registers in AArch64, which was also one of the boons of x86_64. That's often a performance win in itself, though certainly not always. Sometimes performance is helped by avoiding register spills, but other times it's hurt by the added memory pressure of passing around 64-bit pointers.
 
+Maddoggin chillin About the only things that would need preparation are likely to be changes to Dalvik (if any - it may already be 64-bit-clean, in fact knowing Google I'd be shocked if it weren't already), possibly bionic (same situation as Dalvik though - it's probably ready as it is) and to applications that use JNI to access native code.

For the most part, it would be applications that use JNI - but we're many years away from nearly all those apps needing to go to 64-bit code.  Look at the PC industry - many major apps are still 32-bit only, which limits an individual application to 4GB of memory, but the system can still have much more.  Heck, the majority of games, which need a pretty hefty amount of memory to store textures and such, are still 32-bit.  Many of the apps that benefit from operating in 64-bit mode are server-side applications (databases) and workstation-grade applications (scientific data processing, etc.)
 
Or they can have smart AppStore just like PlayStore that will auto determine which version (32 or 64) to send to the phone. 
 
Hmmm, I kind of see what you're saying...
I think you're right..
But right now 64-bit is too much to go for.. Maybe next year an iPhone 5c owner can utilize it fully..
 
Can anyone elaborate on why the "difference between 32-bit and 64-bit is qualitative and not quantitative"? I'd love to understand that more, thanks.
 
Lol, you know why... they saw that live wallpapers took more power than the regular wallpapers.
 
+Tony Chan - Changes from e.g. 1GHz to 2GHz CPUs, or from 512MB to 1GB of RAM, or from 16GB to 32GB of flash storage are quantitative in nature, as seen from the point of view of the application developer: they make applications bigger or faster but don't directly affect the way the applications work.

From the point of view of software engineering, those only have a limited impact, and specifically there's no need to do the same amount of testing on each combination: for each parameter, it's OK to determine which value is the most constraining one and to then extrapolate to the other values with only limited testing.

In languages like C, C++ or ObjC that get compiled directly to CPU instructions, the differences between one instruction set and another one aren't transparent. The languages are designed such that it's possible (with some care) to write code that works on all instruction sets, but it's also possible to write code that doesn't work on all instruction sets, and in the real world it actually happens a lot. In that case, there's little substitute for doing deep testing on each instruction set, and it's dangerous to assume that applications written in C, C++ or ObjC that work well on one instruction set will work well on other instruction sets.

Going from 16GB to 32GB of flash is just a case where the phone has more of the same, but changing the instructions that the CPU understands is a fundamental difference and can't be handled as just "more bits".
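One way to see the data-model difference (a minimal sketch, assuming Apple's 64-bit iOS follows the usual LP64 convention of other 64-bit Unix systems): compile and run this for each architecture, and the numbers change out from under any code that conflated int, long, and pointers.

    #include <stdio.h>

    /* ILP32 (32-bit iOS): int, long, and pointers are all 4 bytes.
       LP64 (64-bit Unix convention): long and pointers grow to 8
       bytes while int stays at 4. */
    int main(void) {
        printf("int:    %zu bytes\n", sizeof(int));
        printf("long:   %zu bytes\n", sizeof(long));
        printf("void *: %zu bytes\n", sizeof(void *));
        return 0;
    }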
 
"In the immediate future, this will make zero difference for consumers. Applications that ship today will continue to be compiled as 32-bit, so that they can run on all the other iOS devices as well."

... Yes, they will be compiled for 32-bit, but I expect many developers to compile for 64-bit too. The Mach-O binary format is a "fat" format that includes slices for multiple architectures. Indeed, there already were three pre-existing slice types on iOS: armv6, for the original iPhone and 3G, armv7 for the A4 processor and later, and armv7s for the A6 processor and later. (armv7s adds VFPv4.) iOS 7 now adds a fourth, armv8.

I expect that all built-in apps will run 64-bit on the iPhone 5S. Apple is no doubt updating their first party App Store apps to support 64-bit, and based on past examples of rapid developer uptake, I expect many other developers to build for ARMv8 too.

"In order to solve the compatibility issue, Apple could allow developers to compile both for 32bit and 64bit and upload both files to the app store. 
The users would then automatically get the correct version according to their device."

"Apple could be automatically compiling both 32 and 64 bit and benefiting from at least compiler optimizations to make 64-bit apps marginally faster."

That's essentially what happens now. You can test your 64-bit builds today using the 64-bit (x86-64) simulator to check for conversion problems, and the Xcode static analyzer is actually really quite good at pointing out many of these problems automatically. Since the binary contains multiple architectures, the loader automatically picks the right slice for the hardware.

This is how it works on OS X, too. Did you notice during the PowerPC to Intel transition that developers provided binaries that supported both architectures? Have you noticed that Mac apps run 64-bit on modern Intel processors, but will run 32-bit on the original Core Solo/Core Duo Intel Macs? This actually goes all the way back to OPENSTEP, which used to support 68k, SPARC, HPPA, and x86 in the same "quad FAT" binaries.
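Since each slice of a fat binary is compiled separately, per-architecture code can be selected at compile time. A small sketch (the macro names are, to the best of my knowledge, the ones Clang defines on Apple platforms):

    #include <stdio.h>

    const char *built_for(void) {
    #if defined(__arm64__)
        return "64-bit ARM slice";
    #elif defined(__arm__)
        return "32-bit ARM slice (armv6/armv7/armv7s)";
    #elif defined(__x86_64__)
        return "64-bit simulator (x86-64)";
    #elif defined(__i386__)
        return "32-bit simulator (i386)";
    #else
        return "unknown architecture";
    #endif
    }

    int main(void) {
        /* The loader picks the matching slice, so this prints whatever
           architecture the running slice was compiled for. */
        printf("compiled for: %s\n", built_for());
        return 0;
    }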
 
Why is everyone seemingly discounting the more-than-doubling of the number of general-purpose registers? There has to be some fill/spill impact of only 14 GPRs on 32-bit ARM. 64-bit integer arithmetic should result in some minor speedup. Improvements to NEON can't hurt. These improvements are all incremental, but they add up, 64-bit addressing aside.
 
Why not make the same application both 32 bit and 64bit, for specific phones?
 
+Vladimir Pantelic  Your desktop machine was x86, not ARM. I'm not sure if mobile ARM processors have PAE extensions, but it's not safe to assume that's the case.
 
+Travis Hayes - I'm not convinced that extra registers will help that much. If anything, the fact that Thumb is faster than plain ARM on most tasks shows that in most cases having more GP registers doesn't speed things up enough to justify the cost of the longer instructions. That's not to say that it never helps.

64-bit arithmetic is rarely a bottleneck IMO, except for very specialized applications.

SIMD improvements can help, but (quite by definition of SIMD) those aren't quite directly tied to the GP register size. Like with Thumb vs ARM, if the improvements come at the expense of longer instruction encodings, it's unclear to me that the improvements outweigh the drawbacks.
 
+Aleksandar Stefanović - Because it takes more engineering time, and specifically more testing. Having a 32-bit version today is probably not a negotiable option given that both the latest iPad and the iPhone 5c don't support 64-bit instructions, and the 32-bit version works fine on the iPhone 5s. Supporting a 64-bit version right now is pure extra cost, and I suspect that few engineering teams will find that this is the best way to spend their time.
 
+Nicholas Alston I did not claim that, but as soon as somebody runs into the 4GB barrier on 32-bit ARM, they could do the same as on x86. 64-bit is not needed in order to handle more than 4GB, though surely it makes things easier.
 
I feel that newer tablets and other devices will very soon have 3GB or more, pushing this.
 
+Jean-Baptiste Quéru my understanding was that Thumb allowed limited access to high registers and Thumb-2 complete access via 32-bit instructions. Thus you (the compiler) would put frequently accessed values in low registers and less frequently accessed values in high registers to optimize fill/spill and I-cache pressure. At least 64-bit ARM keeps instructions to 32-bit.

I'm not familiar enough with iOS development to know how common 64-bit longs are in the standard APIs. I've got more of a Java background, where 64-bit longs are rather common and 64-bit register and memory ops are generally a help.
 
+Vladimir Pantelic - Yup. 64-bit isn't necessary to use 4GB of RAM or more, PAE takes care of that without difficulty.

Turning things around, there are good reasons to use 64-bit addressing even with less than 4GB of RAM, e.g. when memory-mapping large data sets.
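A minimal POSIX sketch of that memory-mapping case (big_dataset.bin is a hypothetical file name): with 64-bit pointers the whole file can be mapped at once and accessed through plain pointers, even when it's larger than physical RAM, because pages are faulted in on demand. A 32-bit process would run out of address space long before it ran out of RAM.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("big_dataset.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

        /* Map the entire file; with 32-bit pointers this fails for
           anything approaching 4GB, regardless of how much RAM the
           device has. */
        void *base = mmap(NULL, (size_t)st.st_size, PROT_READ,
                          MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* ... random access into the whole data set via plain pointers ... */

        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }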
 
+Travis Hayes - Indeed, Thumb vs ARM isn't exactly a case of comparing 8 vs 16 GP registers. For practical purposes it's essentially 8 vs 12 to 14 registers. I'm not familiar enough with Thumb2 to know what they did there.
 
We can laugh about it now, but in a couple of years we'll all be using 64-bit. Just like we're now all using, erm, NFC.
 
+Nick Dixon bazinga! Apple's native-only iOS app approach requires them to roll 64-bit out to devs early. This will be used to best effect on the next iPads; it was just (weird) timing that caused the iPhone 5S to be their first 64-bit device.

On Android, the Dalvik VM could make 64-bit almost an invisible transition. Pity it won't be, as the device makers will need to maintain marketing technical spec parity with Apple.
 
+Franklin Nwankwo - The Android world will be a lot less affected, since Dalvik and Renderscript already offer hardware independence, and since Android already runs on ARM, IA and MIPS today. Developers using the NDK will need to be careful, though.
 
we already build NDK stuff for a variety of CPUs like ARM with and without NEON, so we will just add another target :)
 
+Rich Stone The 4GB RAM limit is only an artificial one for 32-bit operating systems. Windows enforced it, but other OSes have always been able to handle 4GB+ of RAM even when 32-bit.
 
+Travis Hayes, there are already rumours of Samsung going 64-bit with the S5 next year.

Out of curiosity, since it looks like we'll be seeing 'true' octa-core mobile devices soon, are developers actually making good enough use of multi-core at present to warrant more cores?
 
First of all, in Xcode you can choose to build a fat binary (32/64). Just recompile and it's done; there is not much difference from the dev perspective compared to shipping armv7/armv7s. So apps will support 64-bit not in years but next month.

Secondly, a 64-bit bus size matters. Indeed, when you transfer an image from the CPU to the GPU, you will benefit from this enlarged bus. FYI, most custom graphics that use Core Graphics are computed on the CPU and then later transferred to the GPU.

Thirdly, the new instruction set seems to be a big deal for people in the know.

I am sure that Android will catch up soon, but this time we have to admit that it's a significant enhancement of iOS.
 
+Martin Wong adding more cores is cheap and inevitable. It's a strategy of "if you build it, they will come". And if it doesn't pan out, it costs very little. I personally think shared-memory SMP is a 25+ year failed experiment. ;-P
 
almost all "32 bit" processors can access more then 12gb of ram (save for the oldest) the only reason windows 32 bit couldn't access more then 4gb had nothing to do with the proc or hell, even software, it was licensing! theres a patch you can get for 32bit windows (7,vista,xp) that enables access to more then 4GB! up to what your hardwares is capable of (i believe) the patch basically removes the licensing check on 32bit win..
 
Just want to point out that some people are working their ipads/phones harder than others and it's best not to generalize about use cases. 

For instance many musicians are running "audiobus" with separate sequencers, synthesizers, and effects all running simultaneously.

Still others are itching to run CAD programs. Personally, I would love to use Fusion360 on an iPad Pro with an A7X processor.
 
It's not hard to imagine that versions of Logic/Final Cut and other professional applications will soon arrive on iOS and that users will want to run multiple instances of plug-ins such as
http://www.waldorf-music.info/en/products/nave.html

To claim that these users would see no benefit from 64bit processing is misleading in my opinion.

What about the physics and AI of ambitious games, which are still typically calculated on CPUs?

It may seem silly but people are already using apps like 
http://www.native-instruments.com/en/products/traktor/traktor-for-ios/traktor-dj/ and 
http://www.moogmusic.com/products/apps/animoog-iphone
+ many others for live performance and in some cases to earn an income.

I think people are not yet accustomed to the pace that mobile is moving at due to the large amounts of money being thrown at it and the fact that most of these problems have already been solved in the past.
So it's a question of why not make the transition now to minimize the pain later?

There is a shift from consumption to creation taking place faster than most anticipated in this space and I think Apple are simply being proactive so that they can meet the demands some of the more hardcore users are placing on these devices.
 
 
Something I haven't seen noted in many of these discussions is the other things taking up room in the address space which aren't main memory.  In the PC world, I know towards the end of the XP32 era we saw users with 3+ GB of RAM and a 512+ MB GPU or two finding out about the 32 bit limit the hard way.  I know some of this was specific to quirks of the PC architecture, but I imagine the same is in play to some extent in the mobile world.

We now have devices on the market and/or launching soon with 3GB of RAM. If any of the same address space usage issues apply, even the next logical step of 4GB will mean that at least the memory usable by individual applications will become restricted. Of course, as noted, that silly hack PAE exists on a few modern 32-bit ARM cores, but I really don't see why anyone designing a 4+ GB device would think that to be a smart idea rather than using 64-bit addressing properly. Or maybe I'm just a bit salty at Intel still for releasing Core 1 and Atom in 32-bit-only forms, where without those two clogging up the works Windows 7 could have followed 2008R2 into 64-bit exclusivity and I wouldn't still be finding machines to this day that need a reinstall to utilize a RAM upgrade.

As a side note, a question for the lower-level programmers here.  I'm more of a high-level guy, I do 99% of my programming in Python, PHP, or shell scripts in that order with very little time spent in C derivatives so I have no real concept of thinking about the actual memory underlying what I'm doing.  Why is it still apparently so hard to get 32/64 bit right?  Why does this sometimes vary by platform?  Mozilla famously gave up on Win64 Firefox for a time while I was happily using the exact same code as a 64 bit application on Linux.  Obviously sometimes libraries are the limit, as Mac developers who had lagged in switching off Carbon found a few years back, but that's not always the case.
 
+Travis Hayes - that's the problem, adding cores and 64bit is cheaper and inevitable while other things are overlooked.
 
Couldn't the app store keep both compiled copies and when you download, you get whichever is applicable for your device? Seems fairly trivial.
 
+Andy Christianson it seems possible to split fat Mach-O binaries into thin binaries for a specific architecture. Does anyone know if Xcode or the app store publishing procedure already does this? It just seems inefficient to download and install fat binaries all the way to the devices.
 
So, like PCs and Windows XP when 64-bit was taking off: not too well. All will be fair game regardless, and it will take a few generations/upgrades in hardware and software until 64-bit becomes the norm. This is another way for Apple to shoot themselves in the foot, because people are still buying in on the iPhone 4s. I see a repeat of history here, but I think it's possible that this one will pass, seeing as this is a hardware advancement and not really a huge software advancement. Personally, I think the iPhone will still be a low-quality device for the simplistic.
 
+Ryan Dean - Eventually, the type of applications that you mention will be the ones that'll benefit the most from 64-bit environments, and I anticipate that the developers of such applications are now happily waiting for their iPhone 5s to start porting to 64-bit. For now, the need to also run on 32-bit devices most probably means that they'll continue shipping purely that way in my opinion.
 
+Sean Harlow - A big part of the difficulty is that many applications use plug-ins, and have poorly designed plugin interfaces that require updating all the plugins at the same time. When a browser with a poor plugin interface needs to be updated to 64-bit, it also needs its media codecs to become 64-bit, its Flash implementation, its PDF implementation, etc...

The other elephant in the room is that writing portable code in C/C++/ObjC is harder than it seems (or at least that it's really easy to make mistakes that hurt portability). It takes some discipline to think about the size of each variable and about the way those sizes are likely to change as the address space itself grows.
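That plugin-ABI hazard is easy to demonstrate (a sketch of my own, with a made-up struct): any structure shared between a host and its plug-ins changes size and layout when pointers and longs grow, so a 32-bit host cannot exchange it with a 64-bit plug-in.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical message passed across a plugin boundary. The pointer
       and long members are 4 bytes on ILP32 and 8 bytes on LP64, so the
       struct's size and field offsets differ between the two builds. */
    struct plugin_msg {
        int32_t type;
        void   *payload;
        long    flags;
    };

    int main(void) {
        printf("sizeof(struct plugin_msg) = %zu\n",
               sizeof(struct plugin_msg));
        printf("payload at %zu, flags at %zu\n",
               offsetof(struct plugin_msg, payload),
               offsetof(struct plugin_msg, flags));
        return 0;
    }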
 
+Vincent Bellet - Note that many ARM CPUs already have 64-bit buses, and that with memory caches in the middle, most memory accesses are made by whole cache lines anyway, which are longer than 64 bits (they seem to be 512-bit on Cortex-A15).
 
Thank you for the reply Mr. Queru
 
+Jean-Baptiste Quéru yeah, but there is a difference between being a few years away because you thought it was either 32 or 64, and having some code to adapt.
 
Well, they are using Mach-O binaries, so build processes could go to some lengths to optimize. That would make the "Optimizing System with Installed Software" make real sense. But will they (the app builders) optimize? Aside from compilers using default "intrinsic" optimizations, probably not. System-level software will likely only get some optimizations whenever they (the Apple engineers) get around to it. All that being said, there are still rather a lot more transistors to flip (higher power requirements, shorter battery life), and lots of wasted stack and heap space (think x86_64 vs x32). I don't mind being 'stuck' with Thumb2 on Android and others for now ;-)
 
As far as "speed" goes - low power SDRAM still runs at what - 400MHz? 800MHz? There is a reason that ARMs don't drive the bus at 2GHz. Higher memory bus frequencies will drain the battery faster. Memory bus frequency / battery life is still the bottom line. Using wider registers and having more bits of accuracy is really addressing a non-issue, although more registers will address the memory bottleneck a bit.
 
The next generation of Android devices will probably be mostly 64-bit devices. If Samsung pushes 64-bit, then the transition will probably happen much more quickly than 3 years because there will be such a large number of 64-bit devices out there (Samsung's next phone WILL sell millions, just like Apple's new devices will). This will be encouraging to developers since it will provide a real reason to start compiling in 64-bit rather than simply providing a niche market for the same.
 
The iPhone 5 is a good phone, but you need to see the Galaxy S 4, it is the coolest phone.
 
Just curious but could the 64 bit iPhone be powerful enough to somehow pull off that Ubuntu edge feature of docking into a desktop?
 
+Jean-Baptiste Quéru although what you said makes sense and I could agree with it.. my feelings about it are kind of from left field.

Apple is losing market share to Android.. can't really argue with that. So how do they claim that they are still growing... by growing the number of apps.

Currently on Android, 1 app can serve many different phone sizes, tablets, etc. It's all built in. On Apple, it's a separate one for phones and tablets, i.e. 2 apps.

Add 64-bit apps into the mix.. it's 4 apps.. Wow, what growth from 2 to 4.

And that I believe is their strategy.
 
I care about Android much more than iPhone, so I hope you can share your opinion about how this trend will affect Android.
 
I don't work with Android other than as a "fan" who does development for fun occasionally on the side, and as an Android user. However, I've read that Samsung (and probably others) have already been working on 64-bit Android devices, so this isn't all that revolutionary. Also, my guess would be that something is built into Dalvik similar to what we have on 64-bit Windows: you can run 32-bit apps, even if it's just an emulated 32-bit "host" built into it behind the scenes.
 
I would think, given that Thumb seems to be limited/deprecated in ARMv8, that Dalvik would just need an eventual port in the code generator. And depending on how it's used, maybe not even that, given the memory management. But I ain't fun-spelunking that code this week.

In Apple space, I'd also think iOS would be a minor issue to some extent, as programmers should likely be used to 64-bit-friendly code. At least, as far as not trying to cram 64 bits of water into a 16-bit short pail.
 
+Jean-Baptiste Quéru did you see Apple's presentation? From your comments, I wouldn't have guessed that you did. I would agree that most apps won't benefit, but Apple made a point of bringing the Infinity Blade team on stage, and the Infinity Blade developers were raving about how much more they could do on the 5S. It was extremely impressive... and those games/apps will be available to consumers immediately or in the near future once game developers update etc.

Additionally, the Infinity Blade team said it took only 1 developer approx 2 hours to convert from 32 to 64 bit...which quite frankly is awesome.

In all honesty, I would expect Apple's solution for developers to be practical, well thought out, and easy to use, which would correlate strongly w/ the Infinity Blade team's comments. I haven't done much with Xcode 5 and iOS 7 yet other than testing the update of my apps on the iOS 7 beta, but there is a simulator target for x64 and x32 (from the same Xcode project)

I agree with you wholeheartedly that this is good for developers overall and raises the bar for the entire mobile industry.

disclaimer:  I develop for both Android and iOS, not a fanboy
 
+Jean-Baptiste Quéru good post. While you make many valid points, if history is any guide there is one point on which I think you missed the mark: the speed with which app developers will adopt 64-bit on iOS. I'd bet that when Apple flips the switch to 64-bit only will be driven by when Apple sells its last 32-bit iOS device (just guessing: the end of 2015 at the latest), not by waiting for apps to support 64-bit. Apple will likely require new app submissions to include 64-bit support long before that even happens. I'll be surprised if most of the major iOS-only or iOS-and-then-port devs haven't announced full 64-bit support completed and shipping by the end of the year. Cross-platform devs will likely take longer for obvious reasons.

On the Android side, I'd expect the transition (whenever it begins... this fall would be nice :-) to be a lot less work for devs not doing NDK work due to the abstraction layer as you mention, and yet still take much longer given the slow deployment rate for new Android releases.  As someone developing on Android, I wish that weren't the case... but it is what it is.
 
More than likely, by the time 64 bit apps are the norm, the 5s won't be getting the latest OS update needed for a lot of the apps anyway. If it's 64 bit but doesn't get iOS 10 which is needed to run the latest versions of most apps, the point is moot.
 
The 5S would have to load and keep both 32- and 64-bit libraries. I'm wondering what impact that will have.
 
+Rich Stone >4GB memory is just ONE of many things made possible by a 64-bit architecture; the most important thing is a new instruction set, allowing more atomic instructions and allowing more data to be carried in each native instruction.
I don't understand why so many people jumped to the conclusion that 64-bit without 4GB of memory is useless; we're talking about multiple aspects here, not just one.

But yes, 64-bit has no benefit while very few third-party apps are built for 64-bit, so we'll have to wait 1-2 years before we start seeing a decent number of apps compiled for 64-bit.
 
JBQ, it's obvious you don't know anything about AppStore deployment. Third-party developers create fat binaries including both the v7 AND v8 versions, which means apps built with the updated Xcode run in 64-bit mode by default.

What you are saying about applications remaining 32-bit is valid for Android though; I wouldn't write any separate 64-bit version for Android even if there were devices with 64-bit Android.

You are mentioning that Apple will remove 32-bit support in the near future. I completely agree with you on this, but you are missing a crucial point: Apple CAN and WILL remove 32-bit support from its SoC PRIOR TO removing it from iOS.

It might happen even next year! (At least with the low-cost version.)
People hardly use apps not updated for one whole year, and by the time the iPhone 6 launches, virtually all competitive apps will already be 64-bit ready.
So why not save tons of money and shrink the die size? A practice the Android Alliance can't mimic. That's horrifying.

And people also should be aware that 64bit on ARM means much more than on x86 due to the architectural differences. Every low level programmer knows that. I could write a whole report on this subject.
 
+Jake Lee - The issue isn't about store deployment of fat binaries. Those have existed for a long time and are very well understood. The issue is about the engineering overhead of making apps work on 64-bit. There are also lots of historical examples about that, with MacOS being especially rich there, where things rarely go as smoothly as planned (I had a Mac during the PPC switch, during the x86 switch, and during the 64-bit switch).
 
The move to 64-bit is actually much more painful on ARM than on x86, due to the fact that 64-bit and 32-bit don't interwork on ARM, which means switching between the two states can only occur at an exception level.
However, Apple has iron-fist control over its own AppStore, and the life cycle of mobile apps is so much shorter than that of PC applications, so it's actually hardly any problem for Apple to complete the move to 64-bit in a very short time: currently, iOS 4.3 is the minimum requirement for AppStore deployments. Apple has steadily raised this required version number without any major problems. A survey in July 2013 showed 93% of iPhones were running iOS 6 or above. This number has surely increased by a good margin since the launch of iOS 7 and the 5c/5s.

Apple can and will raise this number to 5 very soon, and to 6 not much later. Nothing can prevent Apple from doing that. Some app devs may complain, but it's no big deal considering the extremely low level of fragmentation.
Apps released targeting iOS 6 or higher are automatically 64-bit on the 5s or above. A few months after this change, Apple's move is completed.
If this change happens around summer 2014, odds are good that the iPhone 6 will feature a 64-bit-only SoC that will wreak havoc on Android OEMs with its cost/design/power efficiency.

As an experienced low-level-programming/optimization specialist I could also tell you how wrong it is to compare ARM's move to 64-bit to that of x86 to start with, but it would be too long and technical to be written in a comment. In short : the benefit of 64-bit ARM is huge.

On the other hand, Android will have a hard time moving to 64-bit due to fragmentation and, more importantly, non-interworking: if you run a single 32-bit app alongside 64-bit ones on 64-bit Android, it will drain the battery like crazy in addition to breaking the whole system.

I'm really curious about how the Google engineers will solve this. I'm ready to be excited.
 
+Jake Lee probably by saying 'Don't do that'. The norm for Android apps is to be architecture-independent at compile time. Those using the NDK rather than the SDK will have to update their builds, just like they would if supporting MIPS or x86 devices. Most Android apps that aren't independent of the processor architecture at compile time are likely games and media players. Or doing something most users won't know how to use (like a rooted SSH tunnel or CIFS mounter)
 
+Terry Poulin 
Isn't it funny to see all those performance-irrelevant apps getting 64-bit accelerated out of the box, while apps crying for more computing power require additional effort?

And far, far more importantly, if you HAVE TO target Android 5 or higher with your app for it to be 64-bit enhanced, would you do that?

No way.

There won't be any meaningful number of 64-bit apps before the market share of Android 5 or higher hits the 70~80% mark.

How long will it take? Five years at least.

The tech experts who were proven wrong in discounting Apple's jump to 64-bit are suddenly completely right.
All their points are fully valid for Android.

And SDK-based apps relying on third-party NDK-based libraries are even more problematic.
For this kind of app, Google should also provide a separate 32-bit VM/JIT to prevent them from being laggy and power-hungry due to the state-change overhead.

Google must have done something terribly wrong.
Chaos awaits.
 
+Jake Lee - Android already supports 4 different instruction sets, and adding a 5th one is just a natural incremental evolution.

As for apps that can run as both 32-bit and 64-bit, it wouldn't break any kind of compatibility to allow such apps to specify their preference in their manifest, in a way that'd still work with all previous versions of Android.
 
What you are saying is theoretically quite possible, absolutely no doubt about that, but I'd say it all depends on Google.

If an NDK app targeting 2.2 is supposed to run in 64-bit on Android 5, Google must have done all the backtracking up to that point with their most recent OS/NDK, dealing with all the deprecated system routines.
It might be quite inappropriate talking about Google in front of you, but I'd say that's not the Google I know.

You might have some confidential insider information regarding this, but I still doubt Google cares much looking back this far.

I hope I'm wrong on this since I think 64-bit computing will make quite a difference on mobile devices, unlike on PCs.

regards
 
+Jake Lee those still targeting Froyo/2.2 and not compiling against current stuff deserve whatever problems they get. Next quarter that will almost be like targeting pre-ANSI C on a PC.

I build things against the most recent Android versions that I can test and roll down to the lowest version I can support for what is required at runtime.

Right now, that's mostly OpenGL ES 2.0 in the library side because of the nature of one of my main projects. For general apps work that isn't developer facing, next year, I'm likely going to be saying 4.1 for anything I do => or I assume if I upload it, they're not buying the app anyway. Because they obviously haven't bought a phone in 3 or 4 years and I'm not going out of my way to test on an old Desire Z or original Moto DROID just for people that ain't contributing patches or money.