Anyone photographing a building faces a dilemma: tilt the camera back and get converging verticals, or keep it level and waste much of the angle of view and resolution on unwanted foreground. The convergence can often be stretched back out in a graphics program later, but doing this by hand is tedious and inaccurate, because a proper correction involves not only pulling the verticals apart toward the top but also stretching the picture vertically, at a rate that increases with height, to compensate for the scale that was lost in both directions as the top of the building tapered away from the lens.
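The taper can be seen in a simple pinhole-projection sketch. All the numbers below are illustrative assumptions (a flat facade 10 m away, 10 m wide, 20 m tall, unit focal length, camera pitched back 20 degrees), not anything from a real camera:

```python
import math

def project(point, theta, f=1.0):
    """Project a world point (x, y, z) through a pinhole camera pitched
    back by theta radians (y is up, z is the level viewing direction)."""
    x, y, z = point
    # Express the point in the tilted camera's frame, then divide by depth.
    y_c = y * math.cos(theta) - z * math.sin(theta)
    z_c = y * math.sin(theta) + z * math.cos(theta)
    return (f * x / z_c, f * y_c / z_c)

theta = math.radians(20)
bottom_left, bottom_right = (-5, 0, 10), (5, 0, 10)
top_left, top_right = (-5, 20, 10), (5, 20, 10)

width_bottom = project(bottom_right, theta)[0] - project(bottom_left, theta)[0]
width_top = project(top_right, theta)[0] - project(top_left, theta)[0]
# The facade's top images narrower than its bottom: converging verticals.
# Undoing this means stretching each row by a factor that grows with height.
print(width_bottom, width_top)
```

The ratio of the two widths is exactly the row-by-row horizontal stretch a correction would have to apply, which is why doing it by eye in a graphics program is so error-prone.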
A smartphone's accelerometers could sense the angle at which it is held when the picture is taken, allowing the picture to be stretched automatically to the exact proportions it would have had if the camera had been held vertically, while still framing the general area the photographer aimed at by tilting back, not unwanted foreground. A real-time preview, or at least masking of the areas that would be cut off in the stretching process, would be nice, but processing immediately after capture would be acceptable.
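The correction itself is just a perspective remapping driven by the sensed tilt angle. Here is a minimal sketch of the per-pixel math, assuming a simple pinhole model, a v axis that points up, and made-up parameter names (`theta`, `f`, `cx`, `cy`) rather than any real camera API:

```python
import math

def rectify_point(u, v, theta, f, cx, cy):
    """Map a pixel (u, v) of the desired rectified (level-camera) image to
    the pixel of the tilted photo it should be sampled from.

    theta is the backward tilt from the accelerometer, f the focal length
    in pixels, (cx, cy) the principal point; v points up. A sketch only:
    a real app would run this (or a 3x3 homography) over every pixel."""
    # Ray through the rectified pixel in the level camera's frame.
    x = (u - cx) / f
    y = (v - cy) / f
    # Rotate that ray into the tilted camera's frame and reproject.
    z_t = y * math.sin(theta) + math.cos(theta)
    u_src = f * x / z_t + cx
    v_src = f * (y * math.cos(theta) - math.sin(theta)) / z_t + cy
    return u_src, v_src

# With no tilt the mapping is the identity.
print(rectify_point(320, 240, 0.0, 800, 320, 240))  # -> (320.0, 240.0)
```

Pixels that map to source coordinates outside the captured frame are exactly the areas the preview would need to mask off.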
This would basically replace the photographic technique of "lens shift", which requires clumsy and expensive equipment, albeit at the poorer (but decent, and improving) image quality of a smartphone camera. Perhaps one day there will be a fancy camera with a built-in open-standards computer for image processing.
The related technique of "lens tilt", which likewise requires clumsy and expensive equipment, can be used either for increased depth of field, which is unnecessary given the already-great depth of field of a smartphone's small sensor and lens, or for restricting focus to a narrow band, which can be approximated with selective software blurring.
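That narrow-band approximation can be sketched in a few lines: blur each row of the image by an amount that grows with its distance from a chosen focus row. This is a toy version under assumed names (`fake_tilt_blur`, a grayscale image as a list of rows); a real implementation would blur in both axes with a better kernel:

```python
def fake_tilt_blur(img, focus_row, strength=0.5):
    """Box-blur each row of a grayscale image (list of lists of floats)
    with a radius proportional to its distance from focus_row, mimicking
    a tilt lens's narrow band of sharp focus."""
    out = []
    for r, row in enumerate(img):
        radius = int(abs(r - focus_row) * strength)
        if radius == 0:
            out.append(list(row))
            continue
        n = len(row)
        blurred = []
        for c in range(n):
            lo, hi = max(0, c - radius), min(n, c + radius + 1)
            blurred.append(sum(row[lo:hi]) / (hi - lo))
        out.append(blurred)
    return out

# A high-contrast stripe stays sharp on the focus row and smears elsewhere.
img = [[0.0, 1.0, 0.0, 1.0, 0.0, 1.0] for _ in range(7)]
result = fake_tilt_blur(img, focus_row=3)
```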