If it takes 3 seconds to process a single RAW image on a PC, that figure was achieved on physical hardware where the application has direct access to all CPU cores and can use the full computing power. Most servers offered online (even root servers) run virtualized on a shared host, sharing resources with other virtual servers. (I'm not referring to Apache vhosts here, but to VMware or other forms of virtualization.) On such systems the CPU power is capped to prevent a single server from consuming all resources and making the other servers unusable. Although you might "see" 4 or 8 CPUs in your system reports, their computing power is not fully available to you. This will slow your processing down further.
I have to agree with the others: although it may be technically interesting, I see no real value or need for such an application.
Thank you for addressing the physical and technical aspects of my application. The problem could be solved with cloud computing under specific hardware service level agreements, since it scales well and providers charge only for what you use. I'm still exploring how to proceed and what I can do for my project based on your responses.