Thursday, September 18, 2008
As an ordinary Internet end user as well as a software engineer, I have come across some missing features while surfing the Internet that should be present in any next-generation web browser.
Today's web browsers generally work on a request-response model, which is sufficient to serve the end user: whenever the user makes a request, either by entering a URL or by acting on a web element, the browser sends the request to the server and renders the returned data. The problem with this method is that the same repeated static data is transferred over the Internet for the same client every time. For example, suppose you request www.google.com, close the page or browser after it renders, and then enter the same URL again. In this model, the same data is sent over the network both times.
Next Generation Firefox Off-Line Server (FOLS):
Here is an innovative concept.
Whenever the end user requests a web page, it is stored in a buffer as a cache on the storage device (kept longer the more frequently the same URL is visited). On the next request for the same URL, the browser checks the cache: if none exists, it creates a cache for that URL; otherwise it loads the existing cache and checks the server page for dynamic or updated data. Only if the server page differs from the cache is the corresponding data fetched over the network; the cache is then updated, and the complete cache is rendered in the browser.
In general, all dynamic data is refreshed from the server into the cache on each new request for the same URL. On each such request, the browser compares the checksum of the cached copy with the checksum of the server page. If the values match, the browser renders the page from the cache; if not, the server sends only the updated or new data, the browser updates the cache accordingly, and then renders it. If the Internet connection is down, the browser simply renders the cache with an indication that it is an off-line data/page. This could be a whole new web browser.
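The checksum handshake above can be sketched in Python. Here `fetch_checksum` and `fetch_page` stand in for hypothetical network calls; no real browser exposes this API, so treat it as an illustration of the flow, not an implementation:

```python
# A minimal sketch of the checksum-based cache decision described above.
cache = {}  # url -> (checksum, page content)

def render(url, fetch_checksum, fetch_page):
    """Decide what to render for `url`: the cached copy, fresh data,
    or off-line data when the network is unreachable."""
    entry = cache.get(url)
    remote_sum = fetch_checksum(url)          # small request: checksum only
    if remote_sum is None:                    # disconnected: show cache
        return (entry[1], "off-line") if entry else (None, "unavailable")
    if entry and entry[0] == remote_sum:      # unchanged: render from cache
        return (entry[1], "cached")
    content = fetch_page(url)                 # new or changed: full fetch
    cache[url] = (remote_sum, content)        # update cache, then render
    return (content, "updated")
```

Only the checksum crosses the network when nothing has changed; the full page is fetched just once per change.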
One cache is stored on the local storage device per URL, weighted by how frequently the user requests it. The default storage space for the cache can be increased or decreased by the user, and a smart storage-allocation algorithm assigns cache space to each new URL. If the cache storage becomes full, the cache of the least-visited URL is deleted to make room for the new URL's cache. On every page, the browser indicates to the user whether the currently rendered page matches the server page or is an older copy, and, if it is an older copy, whether the current server page and the rendered page still match.
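The storage-allocation idea above, evicting the least-visited URL when space runs out, can be sketched like this (the names and the fixed entry budget are illustrative, not from the post):

```python
# Sketch: each URL's cache entry carries a visit count; when the
# space budget is exceeded, the least-visited URL's cache is deleted.
MAX_ENTRIES = 3   # stands in for the user-configurable storage budget

cache = {}        # url -> {"visits": int, "content": str}

def store(url, content):
    """Cache `content` for `url`, evicting the least-visited URL
    first if the budget is already full."""
    if url not in cache and len(cache) >= MAX_ENTRIES:
        victim = min(cache, key=lambda u: cache[u]["visits"])
        del cache[victim]                     # drop lowest visit count
    entry = cache.setdefault(url, {"visits": 0, "content": ""})
    entry["visits"] += 1                      # count this visit
    entry["content"] = content
```

This is essentially a least-frequently-used (LFU) policy; a real browser would also weigh entry size and age.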
This method will mainly help with static pages, such as knowledge websites and news websites. For example, if you want to re-read the last wikipedia.org article you visited, the browser can simply render the cache even if you have no Internet connection right now. For an old news page, the browser renders the existing cached page along with any dynamic data, such as advertisements or comments on the page.
Even if this concept worked only for static pages, the end user would still benefit and Internet traffic would still be reduced for knowledge websites like wikipedia.org. But it can work very well for dynamic pages too.
Let me explain how.
For example, consider the cache of Page1.html.
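The original sample does not survive in this copy of the post; purely for illustration, a cache record for Page1.html might hold fields like these (all names and values invented):

```
URL:       http://example.com/Page1.html
Checksum:  3f2a9c…              (compared against the server's value)
Visits:    5                     (drives the storage-allocation policy)
Static:    <html>…</html>        (cached markup, rendered as-is)
Dynamic:   ads, comments         (refreshed from the server each visit)
```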
In case of logical or physical disconnectivity, when the end user requests a page, whether static or dynamic, the browser simply renders it as-is from the cache, with an indication that this is just the last known state of the page. In other words, the page continues from the state it was in when the user last accessed it.
- Increases server efficiency
- Reduces network traffic
- Off-line world connectivity
(As an end user, you do not have to pay your Internet provider for the same data twice.)
During development, for each page the developer should provide some mechanism, such as a checksum, to compare the browser cache against the corresponding server cache/page.
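On the server side, such a mechanism might look like the sketch below; the function names are hypothetical. (The idea is close in spirit to HTTP's existing `ETag`/`If-None-Match` conditional requests, which also let a server answer "not modified" without resending the body.)

```python
import hashlib

def page_checksum(page_html: str) -> str:
    """Checksum of a page; the post does not fix an algorithm,
    so SHA-256 is used here as a stand-in."""
    return hashlib.sha256(page_html.encode("utf-8")).hexdigest()

def handle_conditional_request(page_html: str, client_checksum: str):
    """Return ('not-modified', None) when the client's cached copy is
    current, otherwise the fresh page together with its new checksum."""
    current = page_checksum(page_html)
    if client_checksum == current:
        return ("not-modified", None)     # no body crosses the network
    return ("modified", (current, page_html))
```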