Saturday, June 7, 2008

Not forgetting routine

Jeff Atwood wrote a blog post about The Greatest Invention in Computer Science and, taking it from Steve McConnell, highlighted that the routine is one of them. Reading the comments, it seems that not many supported his opinion. Personally, I will not debate that, as I think the scope of Computer Science is just too wide. But I would like to add some of my own experience with routines.

I learned about routines in the mid 90s when I first started programming in C, and I am still religiously using them today in my OOP code. The article brought out an importance I had almost forgotten, even though I use routines day in and day out in my code.

Recently, I was doing a lot of code review and was surprised to find that the concept of routines has been overshadowed by design patterns. Working on a project that follows the Test Driven Development (TDD) technique with a team that is pro design patterns, some team members have fallen into the same design pattern panacea experienced by the author of Refactoring to Patterns:
The power of patterns led me to lose sight of simpler ways to write code. After learning that there were two or three different ways to do a calculation, I'd immediately race towards implementing the Strategy pattern, when, in fact, a simple conditional expression would have been easier and faster to program - a perfectly sufficient solution.

Being pro design patterns here means applying patterns up-front, without refactoring, and without any of the advantages each pattern is supposed to promote. Logic is all coded into one single routine, which not only violates the Single Responsibility Principle (SRP) but also increases the difficulty of unit testing. Therefore, I highlighted that such code should be refactored into a few routines, with each routine having its own responsibility. I believe this is the basics of programming and should be practiced before any use of design patterns.
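As a concrete illustration of the quote above, here is a hedged C# sketch (all names here are hypothetical, not from the project): a plain conditional covering two or three calculation variants, where racing to a Strategy hierarchy would add classes without adding any advantage.

public enum CustomerType { Retail, Member, Wholesale }

public class DiscountCalculator
{
    // A simple conditional expression: easier and faster to program than
    // a Strategy pattern while there are only a few variants. Refactor to
    // Strategy later, if and when the variants multiply.
    public decimal CalculateDiscount(decimal amount, CustomerType type)
    {
        if (type == CustomerType.Wholesale)
            return amount * 0.15m;
        if (type == CustomerType.Member)
            return amount * 0.05m;
        return 0m;
    }
}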

I was also thrown an interesting debate: if everything is written in one routine instead of being broken into a few routines, the call to a routine will never be duplicated. With further discussion, it turned out that what would be duplicated after the refactoring is one line of code, and that one line buys a much better chance of reuse.
Example: a routine calculates a result from values A and B using a complex formula, where value A is an input parameter and value B is derived from a complex repository. The calculation and the repository access are all written in one routine. To separate the extra responsibility, the derivation of value B should be moved into another routine and its result passed into the calculation routine. With this, any place that needs to call the calculation routine must first call the routine that derives value B, so that one line of code is duplicated at each call site. But imagine the reuse of the calculation routine with a different derived value, such as a value C.
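A hedged sketch of that refactoring (the names and the formula are placeholders): deriving value B becomes its own routine, and the calculation routine takes the derived value as a parameter, so it can be reused with a value C as well.

public interface IComplexRepository
{
    decimal LoadComplexValue();
}

public class ComplexCalculator
{
    private readonly IComplexRepository repository;

    public ComplexCalculator(IComplexRepository repository)
    {
        this.repository = repository;
    }

    // Deriving value B from the complex repository is one responsibility.
    public decimal DeriveValueB()
    {
        return repository.LoadComplexValue();
    }

    // The calculation is another; it no longer cares where the derived
    // value came from, so callers can pass in B, C or anything else.
    public decimal Calculate(decimal valueA, decimal derived)
    {
        return (valueA * derived) / (valueA + derived); // placeholder formula
    }
}

Each call site now duplicates the single line Calculate(a, DeriveValueB()), but the calculation routine is free to be reused with any other derived value, and each routine can be unit tested on its own.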

Writing many lines of code in a single routine with multiple responsibilities makes code reading, future refactoring and testing more difficult. So do not forget the basic concept of routines in favour of the so-called "flexible, sophisticated and elegant" design patterns.

Friday, May 16, 2008

Source control with continuous integration process

Recently, while reading the book Continuous Integration: Improving Software Quality and Reducing Risk, I chanced upon the following practice:

Commit code frequently. Try to make small changes and commit each task when it is completed. Waiting more than a day or so to commit code to the version control repository makes integration time-consuming and may prevent developers from being able to use the latest changes. The longer you wait to integrate with others, the more difficult your integration will prove to be. Developers do not want to commit their code until it is "perfect".

Yes, committing code frequently is very important, especially in a CI (Continuous Integration) environment. But how frequent is frequent? The passage above suggests that more than a day is too long, but should a day be the benchmark? In practice, an assigned task can often be broken down further into many smaller units. Does that mean we should commit each small unit as it is completed? Yes, but that would probably result in code being committed every hour. Besides, code is usually developed from the smallest unit up, before you work on the bigger units that depend on it. So if no other part of the project depends on a small unit, there is no real advantage in committing it just to let others make use of it.

From experience, people working on closely related tasks often encounter conflicts when attempting to merge code. If even merging is a problem, integration will be worse. Therefore, closely related tasks assigned to different developers have to be properly managed, and the management of this process is very important.

Using source control is highly recommended, but I have noticed that some common practices around committing and updating are seldom discussed. Is that because everyone already knows them? I doubt it, judging from my recent experience on a project with 6 developers, some with more than 5 years of experience. Here is what I observed:

  1. A single developer can commit code up to 5 times in a span of 2 minutes. What happened here? These developers were committing not from the trunk level but from each folder where changes were made. Imagine getting an update in between those 5 commits: the build fails, because your update did not capture the last 2 commits that the first 3 depend on.
  2. Every time I got an update of the source, my build failed. Usually this was not a single fault where someone made one mistake; very often 3-5 mistakes had caused the build to fail. What happened here? Partly point 1, and partly code that was not fully committed: one folder was committed but not another that it depends on.
  3. Tests fail. Developers were not getting an update and running the tests locally to ensure the latest version merged and integrated seamlessly with their changes.

The consequence of such ill discipline was many man-hours lost. I spent an average of half an hour to an hour going through the errors and hunting down the person who broke the build each time I got an update. While trying to get to the bottom of this, I realized that CruiseControl had not had a single successful build for a week, with up to an average of 40 builds a day. Digging deeper, and after much questioning, I found that some projects had been removed from the build because certain tests were causing the whole build process to hang.

Not wanting to give up, I added those projects back in to see where the problem was. One amazing find was a test that actually opened up Notepad and caused the whole build to hang there. Next, a test in one of the projects kept raising an error in NAnt that needed human intervention on the build server to close the error prompt; failing that, NAnt timed out and the build just stalled. Worse still, no one made any noise about the issue.
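One guard that would have helped here: NAnt's exec task accepts a timeout. Below is a hedged sketch (paths, file names and the ten-minute figure are hypothetical) that kills a hung test runner instead of letting the build stall indefinitely.

<target name="test">
  <!-- If a test opens a dialog or otherwise hangs, the runner is killed
       after 10 minutes and the build fails visibly instead of stalling. -->
  <exec program="tools\nunit\nunit-console.exe"
        commandline="build\MyProject.Tests.dll"
        timeout="600000" />
</target>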

Therefore, to minimize such occurrences, I have come up with the following guidelines for using source control with continuous integration:

  1. Get an update before working on any task.
  2. Work on the task and make sure the necessary tests execute successfully.
  3. Get an update again before any commit.
  4. Run all tests again to ensure that your task or changes integrate with the latest version without any problem.
  5. Commit your task in one revision.
  6. Check that the build on the build server runs successfully.
  7. Fix or roll back any reported error immediately.

Lastly, the book mentioned above is great for anyone interested in implementing a continuous integration process or enhancing an existing one. It covers continuous integration of the database, source code, tests and deployment. Without such a process, encountering problems like those above would make getting things right a nightmare.

Tuesday, April 29, 2008

Experience with Domain Driven Design

Recently, while having a discussion with a group of colleagues who are new to the Domain Driven Design (DDD) approach, some of the points they highlighted truly reflect how I first felt when I tried to understand the approach from the book and apply it. For anyone new to DDD who would like to apply it, the whole approach can be subdivided into a few areas to give a better segregation of understanding and implementation.

Understanding the concept
I would say that it is not that difficult to understand the concept from reading the book. The first part of the book, with its first three chapters, describes the concept well; it covers mainly the design approach of DDD. Personally, I think the concept is powerful, clean, and easy to understand and use to drive most designs. Part 1 of the video by Eric Evans on InfoQ provides a better picture straight from the horse's mouth. Having a good understanding of the concept is important before moving into the implementation.

Driving the Design
Modeling is the main utility in DDD. The concept clearly defines how to design the model, and how everyone within the project uses the model as the backbone for discussion, right down to the actual implementation in code. One important factor during design is not to add other complexity, such as infrastructure, into the design and the domain model. So far there wasn't much problem adopting this approach during the design phase, and the domain model really provided the team with a solid backbone from design to implementation.

Implementation of the Design into Codes
If you have noticed, I have not said much about the concept or driving the design above, as I believe what is mentioned in the DDD book is more than enough to give you a good start. However, do note the following remark found in chapter 3 of the book:


Developing a single model that captures the problem and provides a practical design is easier said than done. You can't just take any model and turn it into a workable design. The model has to be carefully crafted to make for a practical implementation. Design and implementation techniques have to be employed that allow code to express a model effectively. Knowledge crunchers explore model options and refine them into practical software elements. Development becomes an iterative process of refining the model, the design, and the code as a single activity.
To me, the actual complexity of the whole DDD approach lies in the implementation of the design in code. This was the greatest barrier I hit when I first tried to transfer the design into code. Part two of the DDD book mentions certain design patterns for implementing the design or model in code, but I find this part could be greatly improved on. Even reading Applying Domain Driven Design and Patterns with Examples in C# and .NET by Jimmy Nilsson did not help much. I am not saying the book is not good, just that it is probably not enough to transition from the design to the code.

Phillip Calcado has written an article on Repository Trouble, and I believe this is another example of how certain doubts about the implementation are not clearly explained, probably due to the lack of examples. From my understanding, the main responsibility of the repository, as mentioned in the book, is to:

Hide all the inner workings from the client, so that client code will be the same whether the data is stored in an object database, stored in a relational database, or simply held in memory. The repository will delegate to the appropriate infrastructure services to get the job done.

Example
In a domain with a user object, the user will probably have a few methods such as create user, deactivate user, get user information and many others. Each method needs to persist or retrieve the user object, but no method should be aware of whether this information is persisted through a DAO, LDAP or even a file. The repository takes care of this responsibility, hiding the persistence and the infrastructure from the domain layer. Without the repository, each method would be required to know where to persist each object.

Beyond that, the repository also helps to enforce certain domain rules, such as preventing ad-hoc queries from pulling out arbitrary data rather than navigating from the aggregate roots.
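To make this concrete, here is a minimal C# sketch of the idea; all the names (User, IUserRepository, the in-memory store) are hypothetical and not taken from the book.

using System;
using System.Collections.Generic;

// A hypothetical aggregate root.
public class User
{
    public Guid Id { get; set; }
    public bool Active { get; set; }
}

// The domain layer depends only on this interface; whether users are
// persisted through a DAO, LDAP or a file is an infrastructure detail.
public interface IUserRepository
{
    User FindById(Guid id);   // access navigates from the aggregate root
    void Add(User user);
    void Remove(User user);
}

// An in-memory implementation is enough for tests; a database- or
// LDAP-backed one can be swapped in without touching the domain layer.
public class InMemoryUserRepository : IUserRepository
{
    private readonly Dictionary<Guid, User> store = new Dictionary<Guid, User>();

    public User FindById(Guid id) { return store[id]; }
    public void Add(User user)    { store[user.Id] = user; }
    public void Remove(User user) { store.Remove(user.Id); }
}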

I will continue to discuss the DDD implementation in subsequent posts...

Additional videos by Eric Evans
Part 2: Strategic Design
Part 3: Domain-Driven Design and Domain Specific Languages

Tuesday, April 15, 2008

Unit Testing of Non Public Class and Method

Testing private, internal and protected classes and methods has always been a debatable issue in the test driven development world. Recently, while working on a project, I had to test such non-public classes and methods, as they contain some complex logic and their accessibility has to be restricted. To ensure each method works correctly, unit tests have to be conducted on them from another test project.

The whole solution is structured in such a way that the test project is separated from the application project. I will not go into the discussion of how the projects for the actual code and the tests should be structured, but here is a poll result on the preference. Since the test project is separated from the application project, it has no access to classes and methods with the internal access modifier.

Tim Stall has provided a great overview of testing these non-public methods, along with a helper class on Code Project for those that need to test them. While working on my tests, I encountered scenarios the helper class was unable to take care of: Expected Exception Testing of a Private Method, and an Internal Class with a Private Static Method. Therefore, after some research and testing, I made some modifications to the code, and to how the helper class methods are called, to take care of these 2 scenarios.

Expected Exception Testing of Private Method
The helper class provided by Tim Stall makes testing a private method so much easier, but what happens when the private method throws an exception in some scenario; how can such a test be done? ExpectedException can be used to test for an exception thrown by a method; however, with the helper class, any exception thrown from the actual method will be wrapped in a System.Reflection.TargetInvocationException. This prevents the actual exception from being captured by the test. To capture the actual exception, the catch in the RunMethod of the helper class has to be replaced with the following code:
catch (Exception ex)
{
    // Reflection wraps any exception thrown by the invoked method in a
    // TargetInvocationException; unwrap and rethrow the original one.
    if (ex.InnerException != null)
        throw ex.GetBaseException();
    else
        throw;
}
When an exception has been wrapped and rethrown, its InnerException property contains a reference to the original exception; it is null when the exception caught is itself the original. Therefore, if the exception captured above is not the original one, the code rethrows the base (actual) exception so that it can be captured by the test.
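With that change in place, an expected exception test against a private method becomes straightforward. Here is a hedged usage sketch: it assumes MSTest attributes, and the helper class name (UnitTestHelper), the RunMethod signature, and the target class and method are all hypothetical placeholders.

[TestMethod]
[ExpectedException(typeof(ArgumentNullException))]
public void Transfer_NullAccount_Throws()
{
    // The modified catch block rethrows the original ArgumentNullException
    // instead of the TargetInvocationException wrapper, so the attribute
    // above can match it. RunMethod's signature is assumed here.
    UnitTestHelper.RunMethod(typeof(AccountService), "Transfer",
        new AccountService(), new object[] { null });
}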

Internal Class with Private Static Method
To test a private static method of a public class, RunStaticMethod has to be called with the following parameters:
RunStaticMethod(System.Type t, string strMethod, object[] aobjParams)
Since the class itself is internal, it cannot be referenced from the test project (a compile error results when trying to reference an internal class). To work around this, some people have suggested making the application project and the test project friend assemblies, but that brings other complications such as StrongNameIdentity and public keys. With the use of reflection and 2 lines of code, the System.Type of the internal class can be obtained and passed into RunStaticMethod.

Firstly, get the assembly of the internal class using a reference to a public class belonging to the same assembly:
System.Reflection.Assembly assembly = Assembly.GetAssembly(typeof(PublicClass));
Secondly, get the System.Type of the internal class to pass into RunStaticMethod:
System.Type sysType = assembly.GetType("TechRockGuy.InternalClass");
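Putting the two steps together, here is a hedged sketch (assuming using System and using System.Reflection; the helper class name UnitTestHelper, and the method "Calculate" with its argument, are hypothetical placeholders; PublicClass is any public type in the same assembly):

[TestMethod]
public void InternalClass_PrivateStaticMethod_Works()
{
    // Locate the assembly through any public type it exposes.
    Assembly assembly = Assembly.GetAssembly(typeof(PublicClass));

    // Resolve the internal type by its full name.
    Type sysType = assembly.GetType("TechRockGuy.InternalClass");

    // Invoke the private static method through the helper.
    object result = UnitTestHelper.RunStaticMethod(
        sysType, "Calculate", new object[] { 42 });

    Assert.IsNotNull(result);
}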
With these 2 scenarios covered, I believe other combinations of class and method access modifiers can be taken care of with the helper class and the modifications made here. For those that need to test a private class with private methods, how to instantiate the object is not covered here, but dependency injection should be able to help.

Saturday, April 5, 2008

Assembly analyzer

Release of Framework Design Studio!

I am trying out this new tool to compare a class across two different assemblies; red and green are used to mark the lines of code that were added and removed respectively. More significantly, what is inherited is also displayed in grey, and this greatly improves analysis of the assemblies, without hunting down other classes to get a picture of what is inherited. Comments can also be added during a review of the assembly and even exported. This will definitely be helpful when code review is required.

The diff GUI is something different from the usual text comparison tools (such as Beyond Compare and WinMerge), where the two versions are displayed side by side. Personally, I am still trying to get used to it, as it does not let me compare the differences within each line of code; rather, it displays whole lines in different colours. The tree also indicates whether any difference was found in each class, which speeds up filtering, since an assembly has many classes and going through each one would be time-consuming.


On the other hand, Reflector for .NET has been my favorite assembly analyzing tool so far, as it allows me to analyze assemblies and even the resource files in them. Another great feature is that its analyzer also shows the dependencies on other assemblies. With these two great tools available, the lives of many developers will definitely improve. Thanks to everyone involved in developing them. For those still unaware of these two tools, I urge you to check them out if you ever need to analyze an assembly.

Monday, March 24, 2008

Problem solving is solving problem

Working as a software developer, problem solving is the fundamental skill we use day in and day out. Wikipedia provides a long list of problem solving techniques; however, I am unable to find the new technique that I learned today. It is something I have never learned or heard of before, so I am trying to figure out whether it is ever practiced, and how useful it is. The scenario in which I learned it is actually not directly related to my job scope, but came from a policy decision made by management.

Today, my company had a meeting about the implementation of a new policy. A benefit or entitlement (whether it should be called a benefit or an entitlement is a discussion for another time) was removed from everyone, and there was quite a bit of questioning here and there. Interestingly, when asked why the entitlement (let's just call it an entitlement for now) had been removed, the astonishing reason given was that someone had been abusing it and giving management problems.

It kept me wondering for a while whether this counts as a root cause analysis technique. Suddenly, with the new policy implemented, the problem (people abusing the entitlement) is gone. Isn't it wonderful that not only has the problem been removed, but problems related to the entitlement can never resurface again!

In actual fact, the problem has been removed, but the subject has been removed along with it. I don't remember ever applying such a problem solving technique in my own work. Would such a technique really be applicable there? Imagine the login part of an application you are working on has some security loopholes. Probably the easiest way of solving them is to remove the login entirely. But things exist for a purpose, and removing something from what it is meant to do needs a lot of justification.

Applications are developed to solve problems... Imagine a client coming to you with all his problems, and you replying, "remove everything". Is that how things are going to work? No! Nothing would be left in this world, as everything has its problems. Do not solve a problem by the convenience of removing things without any justification. Problem solving is solving the problem; it is different from ignoring a problem and removing everything related to it.

Wednesday, March 5, 2008

Learning to swim

Recently, while having a discussion with a colleague, one of the topics we touched on was the way we learn how to swim. Using learning to swim as a metaphor for how we survive in the real world, different approaches will often result in different outcomes for each individual's survival skills.

The two common approaches
The easy way of learning to swim is to start at the shallow end of the pool, where the swimmer can always touch the floor. This allows the swimmer to stand and walk whenever he can't cope with the swim. The hard way is to start at the deep end, where the swimmer either swims or drowns.

The resulting outcome
With the easy way, swimmers do not have to try as hard, as they can give up easily and take a rest by standing in the pool. Their determination will not be as strong, and they might find it hard to survive when met with the slightest challenge in life. This learning process also usually takes longer. With the hard way, a swimmer will either survive or drown. The swimmer struggles throughout the learning process, but probably learns to swim in a shorter time. Any swimmer that survives this process will usually be able to take up similar challenges in life easily, and will have stronger determination.

Which is a better way and more significant to an organization?
From the above, it seems pretty obvious that the hard way of learning to swim is the better way. But how does this relate to the real world and to an organization? How often does an organization require its people to possess only basic survival skills? Yes, survival skill is probably the basic prerequisite; however, an organization whose human resources consist only of people with survival skills will most likely find itself on the survival line too.

The ability to excel, rather than just the skill to survive, should matter more to an organization that wants to excel. Anyone who joins the working world should have acquired survival skills in their early years (during their school days). Organizations with people who excel will be the companies that make the leap. People with the skill to survive are easily available, but people who excel will always be in demand. All the people who excel have been through the survival stage in life. So how did they progress?

How to bring the best out of everyone
Will a swimmer be able to excel on his or her own, without any coach or guidance, and how does a person move from being able to swim to excelling at it? All professional swimmers have a coach, with proper training methods and process. The coach is there to provide the necessary guidance, highlighting when the swimmer's technique is incorrect and analyzing how to help the swimmer excel. People who excel also have a passion for what they do. Since finding people who excel is hard, why not train people to excel?

Friday, February 22, 2008

Quantity of test must come with quality

For those familiar with TDD, you should be aware of what code coverage is. For those that aren't, do check out the paper written by Brian Marick, "How to Misuse Code Coverage"!

From my perspective, code coverage measures a certain quantity, not the quality, of the tests on the code. A defined quantity figure can easily be achieved without any quality. A test that just asserts whether an object is null will mark those lines of code or sequence points as visited, but does such a test really exercise the complexity involved? Whether you have one test covering a method or five tests covering the same method, the coverage is the same, yet those five tests might provide a far more exhaustive check (Phil Haack has a good overview of the row test).
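A hedged illustration, assuming MSTest-style attributes and a hypothetical PriceCalculator: both tests below produce identical coverage because they execute the same lines, but only the second verifies the business rule.

[TestMethod]
public void Calculate_ReturnsAResult()
{
    var result = new PriceCalculator().Calculate(100m, 0.07m);
    Assert.IsNotNull(result); // lines visited, nothing proven
}

[TestMethod]
public void Calculate_AppliesSevenPercentTax()
{
    var result = new PriceCalculator().Calculate(100m, 0.07m);
    Assert.AreEqual(107m, result.Total); // asserts the actual rule
}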

How many times have you been thrown a code coverage figure to achieve? I believe the easiest figure to define is 100%. But can a project realistically meet this figure, given its resources? How many have written tests just for the sake of achieving the coverage figure, without any quality? It is important for the people who set this figure to think through what actually matters. Having read a couple of times that 100% is unrealistic for many projects, what makes a good and realistic figure? How many have been given a figure of 80%, and was that figure plucked from the air? I rarely see how such figures are derived. More importantly, guidelines need to define what the test suites should achieve, rather than just a quantity figure.

The guidelines should define the quality of the tests: what needs to be tested and what does not (the latter falls outside the code coverage figure). Most often, tests need to cover the core business domain logic, to prove not only that it passes but that all the complexity in the defined business logic is exercised. Does a facade or delegate without any logic need a test? My answer is most likely NO. Why? A facade or delegate's intent is usually just to expose certain methods, so how are you going to test that intent? Accessibility concerns are easily detected when the project is compiled.

Therefore, have a good understanding of what you are trying to achieve, instead of letting the available tools determine what you can achieve without any significant purpose. Code coverage tools are useful for providing an overview of a project's coverage, but the quality of the code and the test suite can most likely only be achieved through a set of standards and guidelines, enforced with code review. Of course, other tools such as NDepend, which provide different types of code analysis, are also available.

Thursday, February 7, 2008

Version control for database?

Having a database under version control has its own advantages and disadvantages. K. Scott Allen recently wrote a five part series on database version control; I think his articles provide a very good background on getting a database under version control. Jeff Atwood also highlighted that the database is very often neglected in most projects. To my understanding, these articles seem to assume that every database should be kept under version control. I think the first question should be whether version control for the database is required at all. Here are a few questions to ask yourself before deciding:


  1. How many people have the right to make changes to the database?
  2. Is there any problem with development and testing due to constant changes being made to the database?
  3. How many environments do you have, from development to production?
  4. How is the deployment of changes to the other environments carried out?
  5. Have scripts not meant to be deployed been deployed during deployment?
  6. Is any review done on the changes made to the database?
Applications with big or multiple databases usually have more than one person with the right to change any part of the database, and this can be worrisome, as things can easily get out of hand. Version control helps keep track of the changes made over time. This is very useful for everyone involved in the project, from the developers to the DBA, as everyone can track the changes made. If you use a shared database, how often and how badly do changes affect other developers working on the same project? If the impact is unbearable or consumes too much time and effort, then keeping separate development databases might be good. However, there is a trade-off, as keeping databases in sync is not an easy task without a proper process and tools.

Having many environments, such as development, multiple testing environments and production, makes deployment an uphill task, and having each environment in a different phase of the project makes things even more complicated. With the database under version control, the version or phase at any point in time can be easily identified. This makes change management easier and prevents the wrong version from being released to production. If you find yourself in the mess described, my advice is to go ahead and get your database under version control.
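As one possible shape (illustrative only, along the lines of K. Scott Allen's series), the repository might hold a baseline plus numbered change scripts, so the version in any environment is simply the highest script applied there:

database/
    baseline/                         -- full schema to build from scratch
    changes/
        0045_add_customer_index.sql   -- numbered, never edited once committed
        0046_alter_order_status.sql
    adhoc/                            -- scripts never meant for deployment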

Imagine that with a click on the check-in log, you can easily track all the changes made to a database. This not only lets you track changes, it also lets you review them and catch any wrongdoing. From experience, the process of keeping a database under version control is complicated and can be tedious. If the process is not thought through well, or people do not have a clear picture of the version control, the issues above will come back to haunt you. The initial phase of adapting to the new process can be difficult, but over the long run the results will be obvious.


Monday, January 28, 2008

Filling up the gap on becoming a better developer

Reading the prefer design skills post written by Martin Fowler recently, I began to look at it from the perspective of an employee and a developer, rather than from the perspective of an employer. As a programmer myself, and agreeing with most of the reasons highlighted for the preference, I believe the post can serve as one route to becoming a better programmer or developer.

Has design skill been neglected? With so many posts out there providing valuable guidance on how to improve oneself as a developer, design skill is still not often mentioned. So the prefer design skills post probably helps fill one of the gaps on the road to becoming a better developer, alongside the many excellent viewpoints on improving through technical and non-technical means.
Looking at the prefer design skills post from a different angle, I believe it also reveals an employer's perspective on hiring; this is probably one of the criteria your next employer will be looking for in you. In my opinion, the development world is so dynamic and fast-paced, with new languages, platforms, hardware and software introduced every now and then, that the ability to adapt to change and keep oneself updated is just as important; otherwise a developer can become obsolete along with the language and platform he or she is most comfortable with. With broad skills and knowledge, a developer can weigh the pros and cons of different platforms, software and even hardware. With this, better choices can be made, instead of being constrained by the limitations of a single platform.

Saturday, January 26, 2008

Nhibernate many-to-one (M:1) mapping

While working on a project recently, I started to analyze the queries generated by NHibernate for the different mappings available. Playing with the fetching strategies and the lazy loading option, I was able to see how the different queries are formed. Interestingly, I realized that the M:1 mapping does not have a lazy loading option, and decided to investigate this mapping further.

M:1 mapping is used for object references, the same idea as the M:1 association in the relational model. To see the query NHibernate generates, I loaded a class with M:1 mappings. To my surprise, the generated query pulled in about 80% of the tables in my database. Going back to the class: the class I was loading has two many-to-one mappings, the referenced classes have other M:1 mappings of their own, and the references traverse about four levels deep.

With this, I started to reflect on whether something is wrong with my class design, and also to wonder why the M:1 mapping does not support a lazy loading option. I am not sure, but probably the idea is that the referenced object of a many-to-one mapping will always have fewer records in the database than the referencing object. But imagine the referenced object is targeted to grow to about a million records while the referencing object grows to about 10 million: the penalty of always loading both the referencing and referenced objects will be pretty obvious without a lazy loading option.

Apart from the lazy loading option, the default fetching strategy seems to differ from what is stated in the NHibernate documentation for the M:1 mapping. Without specifying any fetching strategy, the strategy used was "join"; only when it is explicitly set to "select" is the select strategy used. Therefore, I urge anyone using NHibernate with performance considerations in mind to start looking at the queries NHibernate generates.
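For reference, this is roughly what the mapping looks like: a hedged hbm.xml sketch with hypothetical class and column names, spelling the fetch strategy out explicitly rather than relying on the default.

<class name="Order" table="Orders">
  <id name="Id" column="OrderId">
    <generator class="native" />
  </id>
  <!-- Explicit fetch="select" avoids the observed default join behaviour. -->
  <many-to-one name="Customer" column="CustomerId" fetch="select" />
</class>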

To work around this issue, for M:1 mappings without much targeted growth in records, the mapping is kept, as the performance cost should be minimal. For mappings with high targeted growth whose referenced object information is seldom used, the mapping relationship is removed and the referenced object loaded only when necessary. However, that means the logic of what to load has to be managed manually, which is similar to the N+1 problem found with lazy loading. Ayende Rahien has a good explanation of, and workaround for, this N+1 problem.

Tuesday, January 22, 2008

Discussion with Jimmy Nilsson

Last week, my company engaged Jimmy Nilsson to provide consultation on a new project that we are working on. During this period, my colleagues and I had an intense discussion with him on topics like Domain Driven Design (DDD) and Test Driven Development (TDD). The discussion opened up a lot of approaches, concepts and ideas for our new project. Being new to these topics, and having read a few books on them, the discussion really gave us a better picture of the parts missing from the books and cleared up some of the assumptions we had made.

Where is the most complex part of an application? Usually it is the domain layer, as it contains the different complexities found in the domain. Therefore, the design and maintenance of the domain complexity can be one of the greatest challenges in a big project. The book Domain Driven Design: Tackling Complexity in the Heart of Software by Eric Evans provides a good idea of how this challenge can be managed and overcome. Together with TDD and the Agile software development approach, the book Applying Domain Driven Design and Patterns: With Examples in C# and .NET by Jimmy Nilsson provides another way of applying DDD.

During the discussion, Jimmy gave his own definition of TDD, in which the focus is not only on testing but also on using tests as a driving tool for the domain design. The idea is to think, and let the design evolve, as the code is written. A team using this approach should not be afraid of change; in software development, the one thing that does not change is the amount of change we see throughout the development lifecycle. As for how well DDD and the Agile approach turn out, I will come back to review that as the project matures.




Saturday, January 19, 2008

My First Post

About me…

My New Year resolution was to create a personal blog to share my greatest interests and to reflect on what I have done by writing it down. Well, this is not my first time writing a post. I wrote a few posts on a blog shared with a co-worker, but that co-worker has since left the company I work for. Anyway, that blog is only for techie stuff.

Not for any selfish reason, I will continue to post techie stuff on that blog if my co-worker is still interested in keeping it active. This blog, of course, will be more than techie stuff: I am hoping to write about, and post some photos of, one of my long-time interests - rock climbing.

I reside in Singapore and work as a software developer. As a developer, I constantly need to research and read about the vast range of technologies, tools, approaches and ideas in this fast paced sector. The dynamic nature of the IT world produces a lot of different thoughts and opinions from different people, and I hope to share some of mine here.