My colleague Bill Tozier has been doing a bunch of digitization work for Distributed Proofreaders, and as part of that we've been discussing how you might build some infrastructure that lets you construct hyperlinks to individual scans of individual pages in particular books.
The observed problem is that if you have a book and have scanned in a page of that book, there is no easy way to predict the URL that would link to that page. Every system (Amazon, Google Books, etc.) has its own way of doing things, and none of them offers any sort of predictable, REST-style URL structure for deep linking.
I can imagine a system with structured page names: a URL parser backed by a naming system, with a regular structure for naming the elements within it, where the system allowed either for unique copies (like LibraryThing) or possibly non-unique copies (ISBNs). The name would also encapsulate the format in which the item would be returned, either as an image or as a data structure with pointers to (something).
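To make the idea concrete, here is a minimal sketch of such a parser in Python. The path layout (`/isbn/<id>/page/<n>.<format>`) and the field names are my own assumptions, not anything any existing service uses:

```python
from dataclasses import dataclass

@dataclass
class PageName:
    scheme: str   # identifier system: "isbn", or some per-copy ID scheme
    ident: str    # the identifier itself
    page: int     # page number within the scanned copy
    fmt: str      # requested return format, e.g. "image" or "json"

def parse_page_name(path: str) -> PageName:
    """Parse a hypothetical page name like
    /isbn/9780140449136/page/42.image into its components."""
    parts = path.strip("/").split("/")
    if len(parts) != 4 or parts[2] != "page":
        raise ValueError(f"unrecognized page name: {path!r}")
    scheme, ident, _, last = parts
    page_str, _, fmt = last.partition(".")
    return PageName(scheme, ident, int(page_str), fmt or "image")
```

A name like this carries everything needed to identify a page and the format to return it in, without saying anything yet about where the scan actually lives.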
This system wouldn't need to hold any data itself; it would just resolve names, or look things up as best it could.
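A data-free resolver along those lines might look like the sketch below: it stores no scans, only per-scheme rules for constructing a URL at some backend. The `scans.example.org` URL patterns are made up for illustration; each real service would need its own rule:

```python
# Hypothetical backends, keyed by identifier system. Each entry maps
# (identifier, page) to a URL at a service that actually hosts scans.
RESOLVERS = {
    "isbn": lambda ident, page: f"https://scans.example.org/isbn/{ident}/{page}.jpg",
    "copy": lambda ident, page: f"https://scans.example.org/copies/{ident}/{page}.jpg",
}

def resolve(scheme: str, ident: str, page: int) -> str:
    """Map (identifier system, identifier, page number) to the best
    URL we can construct; raise if the scheme is unknown to us."""
    try:
        return RESOLVERS[scheme](ident, page)
    except KeyError:
        raise LookupError(f"no resolver registered for scheme {scheme!r}")
```

The point of the design is that resolution is pure lookup and string construction: the system can answer "where is page 42 of this ISBN?" without ever having seen the book.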