- Integrated Development Environment (IDE): It's the tool that anchors our product ecosystem. If you are a Windows operating system fan, then Microsoft Visual Studio .NET would be a good choice. An IDE helps you work on the product seamlessly: you can navigate to files, modify them, build the product, get the binaries, deploy to an application server like IIS, and much more. I am not a Linux fan, so I cannot say much about Linux development tools.
- IDE Extensions: These are small plugins that enhance the features of the IDE. They are written to automate the minor, repetitive activities a developer performs in his daily work. Extensions help us work on the product while reducing the chances of errors and failures, and they can generate a lot of code for you with just a mouse click.
- Servers: I use a build server, an application server, and a database server while working on the product. These servers help me distribute the application load across machine boundaries, giving me the flexibility to change or upgrade the product. Sometimes I also use an FTP server to upload the product deployment binaries.
- Cloud Storage: I use cloud storage for the content of the product, which essentially means all the static resources. It is more economical for me to go with this option because the application may well grow over a period of time, and keeping the content on the cloud makes it easier to keep the product available 24x7.
- Profilers: I prefer to run the product under a profiler. It gives us the ability to find bottlenecks or memory problems while running the application. These are very mature tools and it is a great feeling working with them. Finding the bottlenecks at a very early stage helps us ship a high-quality production build.
- Emulators: These are used to simulate an environment separate from the standard computer, e.g. a tablet or phone, an ATM kiosk, or something as complex as a flight cockpit.
- Messenger & Mail client : These are helper tools, though not mandatory, to stay in touch with your office friends!
Sunday, March 17, 2013
- Cloud Hosting: This feature enables us to create an instance of our website. When we want to host our website on the cloud, we select this option. The choice depends on the web server you would like to use, e.g. for .NET based applications you may like to use Windows/IIS, while for PHP/Java you may go for Apache. The deployment engineer needs to make an appropriate choice of the platform the application targets. The engineer then creates the instance of the website and creates a published copy of the website. The published copy contains the binaries (DLLs) and the view (.aspx) pages. Rackspace recommends uploading the contents of the published folder over FTP into a specific folder named 'Content' (a minimal upload sketch appears after this list). Once the Content folder is stuffed with the binaries, it takes some time for the cloud engine to reflect the changes. Meanwhile, if you are an IIS user, you must restart the application pool using the 'Rebuild Application' option available in the Rackspace Control Panel console. The 'Rebuild Application' option gives the user the ability to force a restart of the application pool so that the new binaries are loaded into the process and the changes are reflected on the website sooner.
- Cloud Files: This feature enables the storage of files on the cloud. Rackspace allows logical grouping of files into what are known as containers (Amazon S3 calls them buckets). Containers are a virtualization over the physical hard drives, meaning a single container can spread across multiple hard disk drives. A container also serves as a unit of isolation: the contents of one container cannot be overwritten or overridden by the contents of another container; they are independent of one another. The cloud vendors also provide an API key to manipulate the files programmatically. A developer or software writer can upload, download, view, and delete files programmatically using the API key, and this programmatic access follows all the basic rules of cloud access, including the concept of containers (buckets in Amazon S3). Cloud Files works on the principle of a CDN (Content Distribution Network): the cloud engine generates a unique path for every file uploaded to the cloud, and the file can then be accessed from anywhere in the world over the internet using its CDN path.
- Cloud Servers: This feature enables the user to create server instances in the cloud. Conventionally, organizations keep servers on their own premises. That setup needs additional effort to maintain the servers, and the organization may need to draft backup plans in case a server crashes. Rackspace frees us from all these worries: this facility enables the user to create server instances on the cloud and use them round the clock, with the overheads taken care of by Rackspace. Rackspace provides several options for creating a server instance; it can create a server with almost any platform (various flavours and versions of Windows and Linux), and there are several pre-defined server images available in the Control Console that make creating a new server instance easy and fast.
- Cloud Load-balancers: This feature enables the user to create load balancers for their servers. I need to explore this feature in more detail and will update this post once I gain enough competency in this area.
- Computing Cycles: The cloud works entirely on the concept of virtual machines. These VMs use several CPUs (processors) that can be spread across different physical machines, hence there is a need to virtualize the notion of CPU usage. This is known as Computing Cycles (CC). It is defined as the total amount of CPU cycles, summed across all the available CPUs, consumed by the user while performing administrative tasks from the cloud Control Panel.
- Storage: The cloud uses the concept of virtual storage to track the amount of storage a specific deployment of the application uses. It also includes the amount of storage used by Cloud Files.
- Server Instances: The total number of servers created by the user, along with the platform information of each server.
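Coming back to the deployment step described under Cloud Hosting, here is a minimal sketch, assuming a made-up FTP host, credentials, and file names, of uploading one published binary into the 'Content' folder. It uses the standard .NET FtpWebRequest class rather than any Rackspace-specific API.

```csharp
using System;
using System.IO;
using System.Net;

class FtpDeployDemo
{
    static void Main()
    {
        // Placeholder values: replace with the FTP host and credentials of your cloud site.
        string ftpUri = "ftp://ftp.example.com/Content/MyWebApp.dll";
        string localFile = @"C:\Publish\bin\MyWebApp.dll";

        var request = (FtpWebRequest)WebRequest.Create(ftpUri);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential("username", "password");

        // Copy the local binary into the FTP request stream.
        using (FileStream file = File.OpenRead(localFile))
        using (Stream upload = request.GetRequestStream())
        {
            file.CopyTo(upload);
        }

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Upload finished: " + response.StatusDescription);
        }
    }
}
```

In practice you would loop over every file in the published folder; after the upload completes, use 'Rebuild Application' as described above so the new binaries are loaded.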
Tuesday, February 5, 2013
Like others, I also wanted to reap the fruits of my knowledge almost instantly, but it never happened. It might be because the knowledge I had initially was superficial or inaccurate. Gradually, as I progressed further along the path of life, I found that the knowledge I gained early on helped me solve many questions and puzzles of my life. The only thing I had missed was to polish it. I found it very interesting and started acquiring more and more of it. Really, we never know when we will get a chance to apply the knowledge we acquired in the past. The best way to retain this knowledge is to keep practicing it till it gets a comfortable seat in your mind. Knowledge in any form should be welcomed without analyzing the degree of success we can achieve by using it. Sometimes we put a lot of effort into gaining knowledge and get frustrated when we cannot reap the fruits instantly. Never mind! Knowledge never goes in vain. It will be used somewhere, at some point in the rest of your life, and can fetch you better fruits. We all know the story of the two woodcutters who used to earn their living by cutting wood in the forest. One of them yielded better results because he used to sharpen his axe every day. It was the knowledge of how the axe works, and what is needed to make it work better, that helped that woodcutter get better results.
The next question that comes to my mind is: "how do I keep myself interested in gaining knowledge?" The answer is to keep craving to polish your knowledge. It is quite true that there is always scope to polish your knowledge, in whatever form you have it. A person tends to stop acquiring knowledge when he gets bored. Similarly, an organization may not succeed in its goals and vision if its employees complain that they are getting bored. The best way to keep ourselves motivated is to ask ourselves a question: "what did I learn today?" The answer to this question will surely make us more active, and our attitude towards the journey of acquiring knowledge will change drastically. Sometimes we get bored with our routine life because we fail to notice the interesting aspects of it. When I wake up and realize it is the same Monday that comes every week, with the same status update calls in the evening, I tend to lose the enthusiasm for reaching the office. The situation is different when I realize that I am going to use a new technology or a new algorithm today to solve the problem at hand. This can also be explained in terms of science: when we are excited, our body releases more adrenaline, which gives us a natural thrust towards doing something creative.
Sunday, January 6, 2013
A REST based service can seamlessly integrate almost any client with the server. The client could be a .NET client or a smart phone application. For this post, let's discuss how a smart phone client can integrate with the REST based endpoint of a WCF service. There are various platforms available to program smart phone applications; the most widely used among them are iOS, Android and Windows Phone. These programming platforms provide a wide range of libraries to connect to web service endpoints. A typical application on the phone sends a request to the service and receives the data from the server in a serialized format (JSON or XML) known as the response packet. The phone application deserializes the JSON/XML packet and displays the data in the application. The entire scenario is a good example of a disconnected application involving a client and a server. The phone application can also perform CRUD operations using GET/POST verbs against the REST based endpoint. A typical scenario is sketched below.
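As a concrete illustration, here is a minimal .NET sketch of a client calling a REST endpoint and deserializing the JSON response packet. The endpoint URL and the Product data contract are made-up placeholders; a phone platform would use its own HTTP and JSON libraries, but the request/response flow is the same.

```csharp
using System;
using System.IO;
using System.Net;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;

[DataContract]
public class Product
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

class RestClientDemo
{
    static void Main()
    {
        // GET the resource; the service is assumed to expose JSON at this hypothetical URI.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/ProductService/products/1");
        request.Method = "GET";
        request.Accept = "application/json";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (Stream body = response.GetResponseStream())
        {
            // Deserialize the JSON response packet into the data contract.
            var serializer = new DataContractJsonSerializer(typeof(Product));
            var product = (Product)serializer.ReadObject(body);
            Console.WriteLine("{0}: {1}", product.Id, product.Name);
        }
    }
}
```

A POST for create/update would follow the same pattern, writing the serialized object into the request stream before reading the response.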
Saturday, July 14, 2012
Selecting a high-end software development computer
Computers can be tagged as multipurpose devices. Nowadays they are used in every profession. There have also been lots of changes in the way computers are built in the present era; their overall architecture evolves very rapidly. Someone from a non-IT background asked me about the ideal configuration of a computer he could own, and my answer was "it depends". The requirements and the usage scenario matter a lot while selecting a computer or a notebook. For example, a journalist may need a computer to keep track of notes and upcoming unpublished articles; a basic configuration is enough to satisfy his needs. On the other hand, a software developer may need an advanced configuration because he is expected to run a whole lot of programming software, some of it in the background. Once we narrow down our usage scenarios it becomes very easy to select one from an authorized reseller. Let's discuss each component of the computer and decide upon an ideal configuration. We will also touch on some of the key terminology hardware vendors use to market their products.
Microprocessor (Category: Performance): As we all know, it is the heart of the computer: it coordinates the operations of all the hardware devices present on the mainboard. For a developer, the ideal processor speed would be 3 GHz (minimum), because development tools are getting richer in features day by day and consume a lot of CPU cycles to provide a smooth experience. The processor's L3 cache also contributes to the speed to some extent. The number of cores is equally important along with the individual core speed, because a lot of programs run at the same time, e.g. many Windows services in the background, anti-virus real-time scanning, Outlook for email, some music, the developer's tools, etc. These programs perform better, and behave more predictably, when they can be scheduled onto separate cores. A processor with hardware thread-level parallelism (hyper-threading) performs even better: more than one hardware thread can run on the same CPU core and keep it busier. If the development scenario requires virtualization, then a virtualization-enabled processor can be selected; these provide better performance, reliability and security while working with virtual machines. Processors are also categorized as 32-bit or 64-bit. A 64-bit processor is required when programs need to address more memory, e.g. opening a 3 GB log file from the production environment, attaching large files to an email, or extending the Visual Studio IDE with lots of plugins.
This lets us put the configuration for the ideal development machine as: 3 GHz – x64 – 4 cores/8 threads – VT enabled.
Physical memory – RAM (Category: Performance): A development machine needs more RAM than a non-development machine. An ideal development machine runs lots of programs, including virtual machines, and hence consumes more RAM. I would put 8 GB RAM as the bare minimum requirement for a high end development computer. The memory speed is equally important so that the RAM does not hold the processor back; e.g. 800 MHz memory will leave the CPU waiting for data more often than 1066, 1333, 1600, 1866 or 2133 MHz memory. A related figure is the bus-to-core ratio, which is associated with the processor: it is the multiplier between the front side bus clock and the processor core clock. For a developer machine it is advisable to pick memory that runs at the highest speed the processor and motherboard support. RAM is also categorized by generation: older SDRAM transferred data once per clock cycle, whereas DDR memory transfers data on both the rising and the falling edge of the clock, and DDR3 additionally runs at higher clock speeds and offers more bandwidth than DDR2. A development machine shows considerably better responsiveness when running software on DDR3 RAM.
This lets us put the configuration of RAM as: DDR3 – 8 GB – at least 1333 MHz.
Hard drive (Category: Performance): The capacity depends on the amount of software a development machine is expected to store, and the drive contributes to the performance of the machine to a great extent. Based on their architecture, drives are categorized into two types: HDD and SSD. HDDs store data on rotating magnetic platters and read it back through heads that sense the magnetic flux. SSDs have no moving parts; they store the data in flash memory chips. SSDs are several times faster than HDDs, especially for random access, but cost considerably more per gigabyte. During the operation of the computer, a major bottleneck can be reading data from the disk, for example on page faults. An SSD speeds up these fetches compared to an HDD and thereby improves the responsiveness of the machine considerably while doing software development.
This lets us put the configuration of the hard drive as: SSD – 120 GB.
Notebook screen (Category: Display): The screen of the notebook should be non-reflective for software development, otherwise it cannot be used comfortably in natural light. Non-glossy (matte) screens, however, render colors in a slightly duller shade. I feel this matters least in a software development scenario, because software development is mostly about writing code rather than building a user interface. This may not hold true for content writers or graphics designers, who may have to opt for glossy screens.
This lets us put the configuration of the screen as: 15.6" matte finish.
Graphics processing unit (Category: Display): The GPU matters the least in a software development machine. Let's select the default option.
Keyboard (Category: Operation): The keyboard is an equally important component while selecting a software development machine. Make sure the key layout is the same as the one you are using right now, e.g. the placement of the Delete or Insert key is not different, otherwise you may have to put in extra effort to remember the new location of each key. The keyboard may be a backlit one if you are going to work in the dark. A separate numeric pad may or may not be present, as it is seldom used during software development. Another important requirement for software development is the use of the function keys: they should not be clubbed with multimedia buttons, otherwise you have to press the Fn key along with the function key while debugging or applying shortcuts.
Speakers (Category: Operation): Only if you are a music fan. Select the default one as there are no options available
Number of USB ports (Category: Operation): The number of USB ports matters a lot because some developers have a habit of using an external mouse, keyboard and USB modem along with USB headsets, printers/scanners and USB device chargers. Hence it is advisable to get as many USB ports as possible; the ideal count would be 4-5.
The excellence of a development machine lies in its individual components, and hence they should be picked with utmost care. The developer should avoid getting attracted by marketing taglines promising the fastest computer on the planet. Instead, look at the individual components, compare them with other brands, and select the machine that suits your requirements. It is also true that the most advanced computer of today will get outdated three years down the line and will need an upgrade or a new purchase. I hope this post covers everything to look for while purchasing a new development machine.
Friday, July 6, 2012
Multithreading is a feature of the programming platform that enables the developer to do parallel processing in software. You can listen to music while coding! The operating system creates a process for each task to be performed in parallel, and the process is scheduled to run on the processor. A process can have one or multiple threads. The execution logic can be divided into multiple threads to achieve concurrent execution; e.g. a DAL thread can fetch data from the database while the UI thread populates it on the UI, thereby reducing the time the user waits to see the results. Programming languages provide lots of APIs to operate on threads and to manage them. In this blog post, let's discuss threads and the paradigm of parallel programming.
In computer science terminology a thread of execution is the smallest unit of processing that can be scheduled by an operating system. A thread is a lightweight process. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. (ref Wikipedia)
In terms of a programming language, there can be two types of threads: the main thread and worker threads. The program runs in the main thread. A worker thread is created from the main thread to share its workload, and is joined back to the main thread so that its result can be consumed there. The sketch below shows a minimal version of this multithreaded scenario.
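Here is a minimal sketch, assuming nothing beyond the standard System.Threading APIs: the main thread starts a worker thread, continues with its own work, and then joins the worker to consume its result.

```csharp
using System;
using System.Threading;

class WorkerThreadDemo
{
    static int result;

    static void Main()
    {
        // Create a worker thread to share the workload of the main thread.
        Thread worker = new Thread(ComputeSum);
        worker.Start();

        Console.WriteLine("Main thread keeps doing other work...");

        // Join the worker back into the main thread before consuming its result.
        worker.Join();
        Console.WriteLine("Result computed by worker thread: " + result);
    }

    static void ComputeSum()
    {
        // Simulated workload: sum the numbers 1..100.
        int sum = 0;
        for (int i = 1; i <= 100; i++) sum += i;
        result = sum;
    }
}
```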
.NET framework methods to operate on thread:
- Start - Starts the thread execution
- Suspend - Suspends the thread execution
- Resume - Resumes the execution of the suspended thread
- Abort - Terminates the thread
Race condition: A race condition occurs when two or more threads read and write shared data at the same time and the final result depends on the order in which the threads happen to be scheduled. The communication and synchronization mechanisms below are used to avoid such situations.
Inter-thread and inter-process communication:
- Message passing: The mechanism that enables threads to exchange messages with each other. The messages should be generated based on a predefined schema so that they can be interpreted by the consumer threads. A typical example of the message passing mechanism is MSMQ. MSMQ stores the messages in XML format; the format must be well understood by the consumer thread so that it can read the messages generated by the producer thread (a minimal send/receive sketch appears after this list).
- Synchronization: The mechanism by which a thread synchronizes itself with other threads so that all threads use a fresh and accurate copy of the data, avoiding discrepancies in the final output.
- Pipes: This methodology sequences the thread execution and passes the result of one thread as an input to the next.
- Shared memory: This methodology reserves some amount of physical memory into which threads spool data that can be shared with other threads. The memory area needs to be protected to ensure that only one thread enters the shared region at any point of time; such protected regions are often referred to as critical sections.
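As a concrete (if simplified) example of message passing, here is a sketch using the .NET System.Messaging wrapper over MSMQ. It assumes MSMQ is installed on the machine and uses a made-up private queue path; the producer sends an XML-formatted message and the consumer reads it back.

```csharp
using System;
using System.Messaging;   // requires a reference to System.Messaging.dll and MSMQ installed

class MessagePassingDemo
{
    const string QueuePath = @".\private$\ordersQueue";   // hypothetical queue name

    static void Main()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            // Both sides must agree on the schema; here the body is a plain string serialized as XML.
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            // Producer: send a message.
            queue.Send("Order #42 created", "OrderCreated");

            // Consumer: receive and interpret the message.
            Message message = queue.Receive(TimeSpan.FromSeconds(5));
            Console.WriteLine("Received: " + message.Body);
        }
    }
}
```

In a real system the producer and consumer would be separate threads or processes; the queue decouples them so neither has to wait for the other.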
Let's discuss the synchronization and shared memory concepts in detail.
Types of thread synchronization:
- Mutex: A flag used by a thread to stop other threads from entering the critical section. The thread sets the flag after it has entered the critical section and resets it when it exits the critical section. The other threads wait (busy waiting) on the flag, checking whether it has been reset; they get a chance to enter the critical section when they see the flag has been reset by the previous thread. At its simplest, a mutex can be thought of as a boolean variable that holds a value of true/false (a short sketch using the .NET synchronization types appears after this list).
- Semaphore: A special implementation of a mutex, except that it maintains a count of the available resources. This lets the operating system limit concurrent access, because the number of resources that can be allocated is predefined. These are known as counting semaphores.
- Monitor: In concurrent programming, a monitor is an object or module intended to be used safely by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion. That is, at each point in time, at most one thread may be executing any of its methods. This mutual exclusion greatly simplifies reasoning about the implementation of monitors compared to reasoning about parallel code that updates a data structure (ref: Wikipedia)
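Here is a minimal sketch using the standard System.Threading Mutex and Semaphore classes: the semaphore limits how many threads may use a "resource" at a time, and the mutex guards a shared counter so only one thread updates it at once. The worker count and sleep times are made up for illustration.

```csharp
using System;
using System.Threading;

class SyncPrimitivesDemo
{
    static readonly Mutex mutex = new Mutex();            // guards the shared counter
    static readonly Semaphore pool = new Semaphore(2, 2); // at most 2 threads hold the "resource" at a time
    static int sharedCounter;

    static void Main()
    {
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(Worker);
            threads[i].Start(i + 1);
        }
        foreach (var t in threads) t.Join();
        Console.WriteLine("Final counter value: " + sharedCounter);
    }

    static void Worker(object id)
    {
        // Semaphore: wait for one of the two available slots.
        pool.WaitOne();
        Console.WriteLine("Thread {0} acquired a resource slot", id);
        Thread.Sleep(100);                                // simulated work with the resource
        pool.Release();

        // Mutex: only one thread at a time may update the shared counter.
        mutex.WaitOne();
        try { sharedCounter++; }
        finally { mutex.ReleaseMutex(); }
    }
}
```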
.NET framework implementation of the thread synchronization and thread safety
- Lock: This keyword is used to ensure that only one thread enters the critical section. The lock is internally compiled down to Monitor calls and operates on an object (reference type) used as the lock token. A sample sketch combining lock, Monitor and the synchronization events appears after this list.
- Events: Events are signals communicated between two threads. An event has two states: non-signaled (wait) and signaled. The wait state causes a thread to wait until another thread signals the event; once signaled, the waiting thread may enter the critical section, which means the previous thread has exited it. There are two kinds of synchronization events in .NET:
- AutoResetEvent: A thread waits on the event by calling WaitOne(); another thread calls Set() to signal it. Exactly one waiting thread is released to enter the critical section, and the event then returns to the un-signaled state automatically, so the next waiting thread must wait for another Set() call.
- ManualResetEvent: The thread calls Reset() to put the event into the un-signaled state and enters the critical section. After completing its work, the thread calls Set() to release the waiting threads; unlike an auto reset event, a manual reset event stays signaled, releasing all waiting threads, until Reset() is called again.
- Monitor: The monitor is implemented using the Enter() and Exit() methods of the Monitor class in .NET. The critical section is placed between Enter() and Exit() so that the monitor ensures only one thread enters the critical section at any given point of time.
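Here is a minimal sketch of these constructs, assuming nothing beyond the standard System.Threading types; the shared counter and the producer/consumer roles are made up for illustration.

```csharp
using System;
using System.Threading;

class DotNetSyncDemo
{
    static readonly object sync = new object();                     // lock token for lock/Monitor
    static readonly AutoResetEvent dataReady = new AutoResetEvent(false);
    static int counter;

    static void Main()
    {
        Thread producer = new Thread(Produce);
        Thread consumer = new Thread(Consume);
        consumer.Start();
        producer.Start();
        producer.Join();
        consumer.Join();
    }

    static void Produce()
    {
        // lock: compiled down to Monitor.Enter/Monitor.Exit on the sync object.
        lock (sync)
        {
            counter++;
        }
        dataReady.Set();             // signal the waiting consumer thread
    }

    static void Consume()
    {
        dataReady.WaitOne();         // wait until the producer signals the event

        // The same critical section written with Monitor explicitly.
        Monitor.Enter(sync);
        try
        {
            Console.WriteLine("Counter value seen by consumer: " + counter);
        }
        finally
        {
            Monitor.Exit(sync);
        }
    }
}
```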
Thread pooling: Creating and destroying a thread for every small piece of work is expensive. A thread pool keeps a set of worker threads alive and queues work items to them so that threads are reused. In .NET this is exposed through the ThreadPool class, e.g. ThreadPool.QueueUserWorkItem(callback).
Deadlock: A deadlock occurs when two or more threads wait for each other indefinitely, e.g. thread A holds lock 1 and waits for lock 2 while thread B holds lock 2 and waits for lock 1; neither can proceed. Acquiring locks in a consistent order (or using timeouts) is the usual way to avoid deadlocks.
Multithreading on single-core and multi-core machines:
A multithreaded application does not yield good results when run on a single-core machine, because all the code still runs on that one core. There is also an overhead involved in managing the threads, which is handled by the CLR and the operating system, e.g. context switching and restoring the state from the previous run. Thread execution on a single-core machine works on the concept of time slices, wherein each thread executes on the processor for a given time span before the next thread is scheduled. A multithreaded application performs better on multi-core machines: a multi-core machine has the ability to schedule the threads on different cores and hence achieves parallel processing in the true sense.
Usages of the multithreading concept: keeping the UI responsive while long-running work happens on a background thread, overlapping I/O (database, file, network calls) with computation, serving multiple clients concurrently on a server, and splitting CPU-heavy work across the available cores.
Future of the multithreading:
Nowadays multithreading has become a basic necessity of any software. There has been a great evolution in processors, as the manufacturers keep putting more cores on the processor die. This allows software writers to use multithreading extensively in their products to increase the responsiveness of the product.