"Id,""Category"",""Message""";;;;;; "312388,""Test"",""

I have a project where I've been using TDD and unit tests as "software vises". In essence I translate the requirements into tests that verify that the code conforms to the requirements. I rarely have to go back and edit the unit tests, which rather is the point: only the "real" code should be modified. At the moment, there are 900 unit tests.

";;;;;; ;;;;;;

Now some requirements have been changed by the gold-owners. Since the former requirements are so thorougly encoded in the existing unit tests, it seems that changing them to conform to the new requirements would be inviting disaster. How do you adapt your unit test suites to handle this kind of change?

;;;;;; " 312388"",""Test"",""

I have two answers to your question, one philosophical and the other tactical.

";;;;;; ;;;;;;

On the philosophical front it is important to consider your unit tests as code. That means all the normal traits of good code are usually appropriate for good tests: intention revealing, remove duplication, etc. Many, perhaps most, of the failures that I've seen with unit testing has come because people haven't treated their tests this way, but rather than just coded them and never revisited them to see if they should be refactored.

;;;;;; ;;;;;;

In my experience, if you've reached a point where your unit tests are a hinderance to change it is because you've got technical debt in your tests.

;;;;;; ;;;;;;

So my tactical suggestion is that before you attempt to change your requirements you look to refactor you tests. Each test should have a unique reason to pass/fail, and the behavior outside of that should be in shared code. This means that for any given behavior change you'll have two places to change the tests:

  1. The test that actually validates that behavior
  2. The places that behavior is used in shared fixture code
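
To make the shared-fixture idea concrete, here is a minimal NUnit-style sketch; the Order/OrderService types and the discount rule are invented for illustration, not taken from the question.

using NUnit.Framework;

// Hypothetical production types, just enough to make the sketch hang together.
public class Order { public int Quantity; public decimal Total; }
public class OrderService
{
    public decimal CalculateDiscount(Order o) { return o.Total >= 100m ? o.Total * 0.10m : 0m; }
}

[TestFixture]
public class OrderServiceTests
{
    // Shared fixture code: the single place that encodes what a "valid order" is.
    // When that part of the requirements changes, only this helper changes.
    private static Order CreateValidOrder() { return new Order { Quantity = 1, Total = 9.99m }; }

    [Test]
    public void Discount_Is_Applied_To_Orders_At_Or_Over_Threshold()
    {
        var order = CreateValidOrder();
        order.Total = 150m;

        // The unique reason for this test to pass or fail: the discount rule itself.
        Assert.That(new OrderService().CalculateDiscount(order), Is.EqualTo(15m));
    }
}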

You might find this article useful: Grow Your Harness Naturally. It was really about a reusable test harness for functional testing but I find the ideas very useful in my unit tests as well.

";;;;;; " 312388"",""Test"",""

If your unit tests no longer match requirements then they shouldnt be there - after all - all they now tell you is that your code conforms to requirements that no longer exist!

";;;;;; ;;;;;;

Whenever you have a change in requirements you should alter the tests that represent the changed requirements and verify that the test now fail (wheras previously they all passed, right? ;))

;;;;; ;;;;;;

Then alter your source code so that the rewritten tests now pass.

"Since the former requirements are so thoroughly encoded in the existing unit tests, it seems that changing them to conform to the new requirements would be inviting disaster."

Any specific reason why you would think so? I sense some fear, or is it just 'don't break it when it's working'?

Change happens. In which case, it means more work-time-money. If the business has no problem with it, neither should you (unless the schedule is inhumane :). If the spec has changed, the tests must change with it.

"In essence I translate the requirements into tests that verify that the code conforms to the requirements."

While I agree with Mnementh's answer, this, to me, is the key comment. If the tests are a translated version of the requirements, then if the requirements have changed, the tests must change.


Or they're testing for something that doesn't meet the requirements of the customer.

As John Maynard Keynes is reported to have said, "When the facts change, I change my opinion. What do you do, sir?"

";;;;;;
;;;;;; ;;;;;;

I think there's an analagous situation here. Your facts have been changed for you

I would add the new tests and make them pass. Then look at which tests have been broken as a result. If you believe the old tests are in contradiction to the new tests then you may have to remove the old tests. Otherwise, you alter the code to make the old tests pass as well.

";;;;;; " 312388"",""Test"",""

Per definition the unit-tests don't replicate the requirements for the application. They describe the requirements for a module. That's a difference, the module can be reused even in an application with different requirements or isn't used at all. So the changing requirements don't affect real unit-tests (except that you have to write new for new modules or abandon old tests for modules no longer needed for the changed requirements).

";;;;;; ;;;;;; "

On the other hand: acceptance-tests deal with the requirements on application-level. So I think you talk about acceptance-tests.

";;;;;; ;;;;;;

I would add the new requirements as new acceptance-test. But for the old ones you have to look through them, how they are invalidated by the changed requirements.

;;;;;; " 312388"",""Test"",""""";;;;;; "312388,""Test"",""How did you adapt your unit tests to deal with changing requirements?""";;;;;; "385730,""Test"",""

I have a number of projects in a solution file that have unit tests written for them, and I want to set them up to be run by our continuous integration server. However, because many of the tests have been written poorly and have not been run regularly, there are many that are failing.

";;;;;; ;;;;;;

I don't have the time at the moment to fix all of the tests but I do believe there is value in having the existing tests run. What is the best way do deal with the failing Unit Tests?


What I am currently doing is marking each failing test as Explicit and leaving a TODO comment.

[Test, Explicit] // TODO: Rewrite this test because it fails

Is there a better way of doing this? Or should I fix all the tests before including them in the tests that are run by the CIS?


What would you do with some other code that has accumulated technical debt?

";;;;;; ;;;;;;

If doing TDD (test first), Unit Tests do two things for you. One is to help design your objects with low coupling and high cohesion. These tests no longer are doing jack shit for you. The second, allows you to refactor your code without changing behavior.

;;;;;; ;;;;;;

It sounds like your failing tests are now an opportunity cost. In other words there are no longer adding value to your project. Just costing you money and time. Look at the time you spent wondering what to do with them? The tests are no longer valid.

;;;;;; ;;;;;;

IMHO, I would delete the tests. They are no longer covering the code, such that If you refactor the code the tests do not protect behavior. It's like having comments in your code that has changed, but the comments were never updated.

If you do delete the tests you will need to treat the code that was supposedly covered by them as "legacy" (Feathers' definition).

";;;;;; " 385730"",""Test"",""

You are doing a good job setting up a continuous integration server running all your tests.

";;;;;; ;;;;;;

But what are disabled tests good for? They are like code commented out. Dead tests. As Jon ;;;;;; said: Make them run or delete them. Often it is better to write new ones if they are poorly written as you say.

;;;;;; ;;;;;;

But when will you have time to fix them? The tests are the only safety net, a software developer has when going further. You need to take the time or you will pay for it later. But maybe it would be take less time to write new tests...

"I don't have the time at the moment to fix all of the tests"


I think you have something backward here...


If you really consider the tests to have value, then with respect I'd suggest you don't have time not to fix them. Right now, they're telling you that the software either doesn't do what it's supposed to do or that the tests are checking for something that no longer applies. Either way it's an indication that the process is broken somewhere.


So, particularly given the time of year and unless you have month- or year-end issues, I'd be making time to clean up either my tests or my code or both.


Seriously, what's the point of having tests if you don't listen to what they're telling you? And why bother running continuous integration if you can't trust what it does?


I disagree with the idea of just deleting the test. If it looks like it should work but it doesn't, that's important information. If a test is mostly okay, but there's something environmental which is causing it to fail (e.g. reading a local file which is now in a different place) then it can easily provide value again when you find time to fix it. However:

";;;;;; ;;;;;; ;;;;;; " 385730"",""Test"",""

Well, in NUnit you have the option to ignore tests using the ignore attribute:

";;;;;; ;;;;;; "
[Test, Ignore(""""Test needs rewrite"""")]";;;;;;
;;;;;; ;;;;;;

Personally though, there are two things that I do with such tests:


Gleaning from what you've written I would suspect that many of those failing tests are out of date and may not be relevant in the first place, so I think it would be fine to delete them.


There's no point in keeping a test that nobody understands anyway.


UPDATE: Oren Eini has a blog post which outlines most of how I feel about activating old, failing tests:

";;;;;; ;;;;;; "

The tests has no value by themselves: My most successful project didn't have any tests

";;;;;; ;;;;;;

To quote:

"Tests are a tool, and its usage should be evaluated against the usual metrics before applying it in a project. There are many reasons not to use tests, but most of them boil down to: 'They add friction to the process'."

";;;;;;
;;;;;; ;;;;;;

If retrofitting old, failing tests adds friction to the process, maybe it isn't worth updating them at all.

Since you have a running automated build (with test failure notification!), this sounds like it's time for 5-a-day (borrowed freely from the Ubuntu community):

";;;;;; ;;;;;;

In each test method that's failing insert the following (pseudocode):

if (DateTime.Now < new DateTime(2008, 12, 24, 11, 0, 0)) return;

For every 5 times you insert this statement, you advance the date by one working day. Set the time of day to a time when you're likely to have time to fix the test.

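In context, such a guard might look like the sketch below; the fixture, test name, and placeholder comment are hypothetical.

using System;
using NUnit.Framework;

[TestFixture]
public class LegacyInvoiceTests
{
    [Test]
    public void Totals_Include_Tax()   // a hypothetical, currently failing test
    {
        // "Snooze" guard: the test silently passes until the chosen working
        // day arrives, after which it runs (and fails) again and must be
        // fixed or deleted.
        if (DateTime.Now < new DateTime(2008, 12, 24, 11, 0, 0)) return;

        // ...the original, currently failing assertions go here...
    }
}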

When the working day arrives, you fix it or delete it.

Why does NotImplementedException exist?

This really, really irks me, so I hope that someone can give me a reasonable justification for why things are as they are.

";;;;;; ;;;;;;

NotImplementedException. You are pulling my leg, right?

No, I'm not going to take the cheap stab at this by saying, "hang on, the method is implemented - it throws a NotImplementedException." Yes, that's right, you have to implement the method to throw a NotImplementedException (unlike a pure virtual function call in C++ - now that makes sense!). While that's pretty damn funny, there is a more serious problem in my mind.

";;;;;; ;;;;;;

I just wonder, in the presence of the NotImplementedException, how can anyone do anything with .Net? Are you expected to wrap every abstract method call with a try catch block to guard against methods that might not be implemented? If you catch such an exception, what the heck are you supposed to do with it??

I see no way to test if a method is actually implemented without calling it. Since calling it may have side effects, I can't do all my checks up-front and then run my algorithm. I have to run my algorithm, catch NotImplementedExceptions and then somehow roll back my application to some sane state.


It's crazy. Mad. Insane. So the question is: Why does the NotImplementedException exist?

As a preemptive strike, I do not want anyone to respond with, "because designers need to put this in the auto-generated code." This is horrid. I would rather the auto-generated code not compile until you supply an implementation. For example, the auto-generated implementation could be "throw NotImplementedException;" where the NotImplementedException is not defined!


Has anyone ever caught and handled a NotImplementedException? Have you ever left a NotImplementedException in your code? If so, did this represent a time bomb (ie, you accidentally left it there), or a design flaw (the method should not be implemented and will never be called)?

I'm very suspicious of the NotSupportedException also... Not supported? What the? If it's not supported, why is it part of your interface? Can anyone at Microsoft spell improper inheritance? But I might start another question for that if I don't get too much abuse for this one.


Additional info:


This is an interesting read on the subject.

";;;;;; ;;;;;; "

There seems to be a strong agreement with Brad Abrams that """"NotImplementedException is for functionality that is just not yet implemented, but really should (and will be). Something like what you might start with when you are building a class, get all the methods there throwing NotImplementedException, then flush them out with real code…""""

";;;;;; ;;;;;; "

Comments from Jared Parsons are very weak and should probably be ignored: NotImplementedException: Throw this exception when a type does not implement a method for any other reason.

";;;;;; ;;;;;; "

The MSDN is even weaker on the subject, merely stating that, """"The exception that is thrown when a requested method or operation is not implemented.""""

";;;;;; " 410719"",""Test"",""

It's there to support a fairly common use case, a working but only partially completed API. Say I want to developers to test and evaluate my API - WashDishes() works, at least on my machine, but I haven't gotten around yet to coding up DryDishes(), let alone PutAwayDishes(). Rather than silently failing, or giving some cryptic error message, I can be quite clear about why DryDishes() doesn't work - I haven't implemented it yet.

";;;;;; ;;;;;;

Its sister exception NotSupportedException make sense mostly for provider models. Many dishwashers have a drying function, so belongs in the interface, but my discount dishwasher doesn't support it. I can let that be known via the NotSupportedException

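As a minimal sketch of that distinction (the interface and class names are invented for illustration):

using System;

public interface IDishwasher
{
    void WashDishes();
    void DryDishes();
}

// API still under development: drying is planned but not written yet.
public class PrototypeDishwasher : IDishwasher
{
    public void WashDishes() { /* works today */ }
    public void DryDishes() { throw new NotImplementedException(); } // coming soon
}

// Finished product that simply lacks the feature.
public class DiscountDishwasher : IDishwasher
{
    public void WashDishes() { /* works */ }
    public void DryDishes() { throw new NotSupportedException("No drying function on this model."); }
}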

There is one situation where I find it useful: TDD.

";;;;;; ;;;;;;

I write my tests, then I create stubs so the tests compile. Those stubs do nothing but throw new NotImplementedException();. This way the tests will fail by default, no matter what. If I used some dummy return value, it might generate false positives. Now that all tests compile and fail because there is no implementation, I tackle those stubs.

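A minimal sketch of that stub-first flow (the calculator class, test, and expected values are invented for illustration):

using System;
using NUnit.Framework;

// Stub created only so the test below compiles; it fails by default.
public class PriceCalculator
{
    public decimal Total(decimal net, decimal taxRate)
    {
        throw new NotImplementedException();
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_Adds_Tax()
    {
        // Red by construction: the NotImplementedException guarantees a failure
        // until real code replaces the stub, unlike a dummy "return 0m;" which
        // could produce a false positive.
        Assert.That(new PriceCalculator().Total(100m, 0.2m), Is.EqualTo(120m));
    }
}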

Since I never use a NotImplementedException in any other situation, no NotImplementedException will ever slip into release code, since it will always make some test fail.


You don't need to catch it all over the place. Good APIs document the exceptions thrown. Those are the ones you should look for.


EDIT: I wrote an FxCop rule to find them.


This is the code:

using System;
using Microsoft.FxCop.Sdk;

/// <summary>
/// An FxCop rule to ensure no <see cref="NotImplementedException"/> is
/// left behind in production code.
/// </summary>
internal class DoNotRaiseNotImplementedException : BaseIntrospectionRule
{
    private TypeNode _notImplementedException;
    private Member _currentMember;

    public DoNotRaiseNotImplementedException()
        : base("DoNotRaiseNotImplementedException",
               // The following string must be the assembly name (here
               // Bevonn.CodeAnalysis) followed by a dot and then the
               // metadata file name without the xml extension (here
               // DesignRules). See the note at the end for more details.
               "Bevonn.CodeAnalysis.DesignRules",
               typeof (DoNotRaiseNotImplementedException).Assembly) { }

    public override void BeforeAnalysis()
    {
        base.BeforeAnalysis();
        _notImplementedException = FrameworkAssemblies.Mscorlib.GetType(
            Identifier.For("System"),
            Identifier.For("NotImplementedException"));
    }

    public override ProblemCollection Check(Member member)
    {
        var method = member as Method;
        if (method != null)
        {
            _currentMember = member;
            VisitStatements(method.Body.Statements);
        }
        return Problems;
    }

    public override void VisitThrow(ThrowNode throwInstruction)
    {
        if (throwInstruction.Expression != null &&
            throwInstruction.Expression.Type.IsAssignableTo(_notImplementedException))
        {
            var problem = new Problem(
                GetResolution(),
                throwInstruction.SourceContext,
                _currentMember.Name.Name);
            Problems.Add(problem);
        }
    }
}

And this is the rule metadata:

<"?xml version=""""1.0"""" encoding=""""utf-8"""" ?>";;;;;
<"Rules FriendlyName=""""Bevonn Design Rules"""">";;;;;
  <"Rule TypeName=""""DoNotRaiseNotImplementedException"""" Category=""""Bevonn.Design"""" CheckId=""""BCA0001"""">";;;;;
    <Name>Do not raise NotImplementedException</Name>;;
    <Description>NotImplementedException should not be used in production code.</Description>;;
    <Url>http://stackoverflow.com/questions/410719/notimplementedexception-are-they-kidding-me</Url>;;
    <Resolution>Implement the method or property accessor.</Resolution>;;
    <"MessageLevel Certainty=""""100"""">";CriticalError</MessageLevel>;;
    <Email></Email>;;
    <FixCategories>NonBreaking</FixCategories>;;
    <Owner></Owner>;;
  </Rule>;;;;
</Rules>;;;;
;;;;;; ;;;;;;

To build this you need to:

  • Reference Microsoft.FxCop.Sdk.dll and Microsoft.Cci.dll.
  • Put the metadata in a file called DesignRules.xml and add it as an embedded resource to your assembly.
  • Name your assembly Bevonn.CodeAnalysis. If you want to use different names for either the metadata or the assembly files, make sure you change the second parameter to the base constructor accordingly.

Then simply add the resulting assembly to your FxCop rules and take those damned exceptions out of your precious code. There are some corner cases where it won't report a NotImplementedException when one is thrown but I really think you are hopeless if you're actually writing such cthulhian code. For normal uses, i.e. throw new NotImplementedException();, it works, and that is all that matters.


I'll summarize my views on this in one place, since they're scattered throughout a few comments:

";;;;;; ;;;;;;
    ;;;;;;
  1. You use NotImplementedException to indicate that an interface member isn't yet implemented, but will be. You combine this with automated unit testing or QA testing to identify features which still need to be implemented.

  2. ;;;;;;
  3. Once the feature is implemented, you remove the NotImplementedException. New unit tests are written for the feature to ensure that it works properly.

  4. ;;;;;;
  5. NotSupportedException is generally used for providers that don't support features that don't make sense for specific types. In those cases, the specific types throw the exception, the clients catch them and handle them as appropriate.

  6. ;;;;;;
  7. The reason that both NotImplementedException and NotSupportedException exist in the Framework is simple: the situations that lead to them are common, so it makes sense to define them in the Framework, so that developers don't have to keep redefining them. Also, it makes it easy for clients to know which exception to catch (especially in the context of a unit test). If you have to define your own exception, they have to figure out which exception to catch, which is at the very least a counter-productive time sink, and frequently incorrect.

  8. ;;;;;;

The NotImplementedException exists only to facilitate development. Imagine you start implementing an interface. You'd like to be able to at least build when you are done implementing one method, before moving to the next. Stubbing the unimplemented methods with NotImplementedExceptions is a great way of leaving unfinished code that is super easy to spot later. Otherwise you would run the risk of quickly implementing something that you might forget to fix.

";;;;;; " 410719"",""Test"",""

I have a few NotImplementedExceptions in my code. Often it comes from part of an interface or abstract class. Some methods I feel I may need in the future; they make sense as part of the class, but I just don't want to take the time to add them unless I actually need them. For example, I have an interface for all the individual kinds of stats in my game. One of those kinds is a ModStat, which is the sum of the base stat plus all the modifiers (i.e. weapons, armor, spells). My stat interface has an OnChanged event, but my ModStat works by calculating the sum of all stats it references each time it is called. So instead of having the overhead of a ton of ModStat.OnChange events being raised every time a stat changes, I just have a NotImplementedException thrown if anyone tries to add/remove a listener to OnChange.

";;;;;; ;;;;;;

.NET languages are all about productivity, so why spend your time coding something you won't even use?


What about prototypes or unfinished projects?

";;;;;; ;;;;;;

I don't think this is a really bad idea to use an exception (although I use a messagebox in that case).

Here is one example: in Java, whenever you implement the interface Iterator, you have to override the obvious methods hasNext() and next(), but there is also remove(). In 99% of the use cases I have, I do not need this, so I just throw a NotImplementedException. This is much better than silently doing nothing.

";;;;;; " 410719"",""Test"",""

Well, I somewhat agree. If an interface has been made in such a way that not all class can implement all bits of it, it should've been broken down in my opinion.

";;;;;; ;;;;;;

If IList can or cannot be modified, it should've been broken down into two, one for the unmodifiable part (getters, lookup, etc.), and one for the modifiable part (setters, add, remove, etc.).


They are both hacks for two common problems.

";;;;;; ;;;;;;

NotImplementedException is a workaround for developers who are architecture astronauts and like to write down the API first, code later. Obviously, since this is not a incremental process, you can't implement all at once and therefore you want to pretend you are semi-done by throwing NotImplementedException.

NotSupportedException is a hack around the limitations of type systems like those found in C# and Java. In these type systems, you say that a Rectangle 'is a' Shape iff Rectangle inherits all of Shape's characteristics (incl. member functions + variables). However, in practice, this is not true. For example, a Square is a Rectangle, but a Square is a restriction of a Rectangle, not a generalization.

So when you want to inherit from and restrict the behavior of the parent class, you throw NotSupportedException on methods which do not make sense for the restriction.

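A minimal sketch of that kind of restriction (the Rectangle/Square classes are illustrative only):

using System;

public class Rectangle
{
    public virtual int Width  { get; set; }
    public virtual int Height { get; set; }
}

// A Square restricts Rectangle: width and height cannot vary independently,
// so the independent setters are rejected rather than quietly breaking the
// square invariant.
public class Square : Rectangle
{
    private int _side;

    public int Side
    {
        get { return _side; }
        set { _side = value; }
    }

    public override int Width
    {
        get { return _side; }
        set { throw new NotSupportedException("Set Side instead; a square's width cannot change alone."); }
    }

    public override int Height
    {
        get { return _side; }
        set { throw new NotSupportedException("Set Side instead; a square's height cannot change alone."); }
    }
}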

Rarely, I do use it for interface fixing. Assume you have an interface that you need to comply with, but a certain method will never be called by anyone; just stick in a NotImplementedException, and if someone calls it they will know they are doing something wrong.

";;;;;; " 410719"",""Test"",""

NotImplementedException is thrown for some method of .NET (see the parser C# in Code DOM which is not implemented, but the method exist !)";;;;;; You can verify with this method Microsoft.CSharp.CSharpCodeProvider.Parse

From ECMA-335, the CLI specification, specifically the CLI Library Types, System.NotImplementedException, remarks section:

";;;;;; ;;;;;; "

""""A number of the types and constructs, specified elsewhere in this Standard, are not required of CLI implementations that conform only to the Kernel Profile. For example, the floating-point feature set consists of the floating-point data types System.Single and System.Double. If support for these is omitted from an implementation, any attempt to reference a signature that includes the floating-point data types results in an exception of type System.NotImplementedException.""""

";;;;;; ;;;;;; "

So, the exception is intended for implementations that implement only minimal conformance profiles. The minimum required profile is the Kernel Profile (see ECMA-335 4th edition - Partition IV, section 3), which includes the BCL, which is why the exception is included in the """"core API"""", and not in some other location.

";;;;;; ;;;;;;

Using the exception to denote stubbed methods, or for designer generated methods lacking implementation is to misunderstand the intent of the exception.

Why this information is NOT included in the MSDN documentation for MS's implementation of the CLI is beyond me.

I can't vouch for NotImplementedException (I mostly agree with your view) but I've used NotSupportedException extensively in the core library we use at work. The DatabaseController, for example, allows you to create a database of any supported type and then use the DatabaseController class throughout the rest of your code without caring too much about the type of database underneath. Fairly basic stuff, right? Where NotSupportedException comes in handy (and where I would have used my own implementation if one didn't already exist) is in these instances:

";;;;;; ;;;;;;

1) Migrating an application to a different database;;;;;; It's often argued this rarely, if ever, happens or is needed. Bullsh*t. Get out more.

2) Same database, different driver. The most recent example of this was when a client who uses an Access-backed application upgraded from WinXP to Win7 x64. There being no 64-bit JET driver, their IT guy installed the AccessDatabaseEngine instead. When our app crashed, we could easily see from the log that it was DB.Connect crashing with NotSupportedException - which we were quickly able to address. Another recent example was one of our programmers trying to use transactions on an Access database. Even though Access supports transactions, our library doesn't support Access transactions (for reasons outside the scope of this article). NotSupportedException, it's your time to shine!

3) Generic functions. I can't think of a concise "from experience" example here, but if you think about something like a function that adds an attachment to an email, you want it to be able to take a few common files like JPEGs, anything derived from various stream classes, and virtually anything which has a ".ToString" method. For the latter part, you certainly can't account for every possible type, so you make it generic. When a user passes OurCrazyDataTypeForContainingProprietarySpreadsheetData, use reflection to test for the presence of a ToString method and throw NotSupportedException to indicate a lack of support for data types that don't support ToString.

";;;;;; ;;;;;;

NotSupportedException isn't by any means a crucial feature but it's something I find myself using a lot more as I work on larger projects.


NotImplementedException

";;;;;; ;;;;;;

The exception is thrown when a requested method or operation is not implemented.


Making this a single exception defined in the .NET core makes it easier to find and eradicate them. If every developer created their own ACME.EmaNymton.NotImplementedException, it would be harder to find all of them.


NotSupportedException


The exception is thrown when an invoked method is not supported.


For instance when there is an attempt to read, seek, or write to a stream that does not support the invoked functionality.

For instance, a generated iterator (using the yield keyword) is-an IEnumerator, but its IEnumerator.Reset method throws NotSupportedException.

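That last case is easy to see for yourself; a small sketch (the iterator method here is made up):

using System;
using System.Collections;
using System.Collections.Generic;

class ResetDemo
{
    static IEnumerable<int> Numbers()
    {
        yield return 1;
        yield return 2;
    }

    static void Main()
    {
        IEnumerator<int> e = Numbers().GetEnumerator();

        try
        {
            // The compiler-generated iterator implements IEnumerator,
            // but its Reset is not supported.
            ((IEnumerator)e).Reset();
        }
        catch (NotSupportedException)
        {
            Console.WriteLine("Reset is not supported on generated iterators.");
        }
    }
}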

Let's say you have this method in your production code:

";;;;;; ;;;;;;
public void DoSomething();;;;;;
;;;;;; ;;;;;;

Which one would you take if you want to leave it to be finished later?

public void DoSomething()
{
}

or

public void DoSomething()
{
    throw new NotImplementedException();
}

I would certainly take the second, coupled with Elmah or whatever error logging mechanism you have (implemented as an aspect across your entire application), together with log/exception filtering to trigger a critical-error email notification when one is caught.


The argument that NotImplementedException == unfinished isn't correct either. (1) Catching unimplemented methods should be left to unit tests/integration tests. If you have 100% coverage (which you should now be able to achieve, with so many mock/stub/code generation tools) with no NotImplementedException, what are your worries? (2) Code generation. Plain and simple. Again, if I generate the code and only use half of the generated code, why wouldn't I have NotImplementedException in the rest of the generated stubs?

It's like saying code shouldn't compile unless every nullable input is checked/handled for null (AKA the trillion dollar mistake, if not more). Language should be flexible, while tests/contracts should be solid.


This sounds like a potential minefield to me. In the distant past I once worked on a legacy network system that had been running nonstop for years and which fell over one day. When we tracked the problem down, we found some code that had clearly not been finished and which could never have worked - literally, like the programmer got interrupted during coding it. It was obvious that this particular code path had never been taken before.

";;;;;; ;;;;;;

Murphy's law says that something similar is just begging to happen in the case of NotImplementedException. Granted in these days of TDD etc, it should be picked up before release, and at least you can grep code for that exception before release, but still.

When testing it is difficult to guarantee coverage of every case, and this sounds like it makes your job harder by making run-time issues out of what could have been compile-time issues. (I think a similar sort of 'technical debt' comes with systems that rely heavily on 'duck typing', while I acknowledge they are very useful.)

You need this exception for COM interop. It's E_NOTIMPL. The linked blog also shows other reasons.

";;;;;; " 410719"",""Test"",""

Throwing NotImplementedException is the most logical way for the IDE to to generate compiling stub code. Like when you extend the interface and get Visual Studio to stub it for you.

";;;;;; ;;;;;;

If you did a bit of C++/COM, that existed there as well, except it was known as E_NOTIMPL.

There is a valid use case for it. If you are working on a particular method of an interface, you want your code to compile so you can debug and test it. According to your logic you would need to remove the method from the interface and comment out non-compiling stub code. This is a very fundamentalist approach, and whilst it has merit, not everyone will or should adhere to it. Besides, most of the time you want the interface to be complete.

Having a NotImplementedException nicely identifies which methods are not ready yet; at the end of the day it's as easy as pressing Ctrl+Shift+F to find them all, and I am also sure that static code analysis tools will pick them up too.

You are not meant to ship code that throws NotImplementedException. If you think that by not using it you can make your code better, go forth, but there are more productive things you can do to improve the source quality.


Two reasons:

";;;;;; ;;;;;;
    ;;;;;;
  1. Methods are stubbed out during development, and throw the exception to remind the developers that their code writing is not finished.

  2. ;;;;;;
  3. Implementing a subclass interface that, by design, does not implement one or more methods of the inherited base class or interface. (Some interfaces are just too general.)

  4. ;;;;;;

Why do you feel the need to catch every possible exception? Do you wrap every method call with catch (NullReferenceException ex) too?

";;;;;; ;;;;;;

Stub code throwing NotImplementedException is a placeholder, if it makes it to release it should be bug just like NullReferenceException.


I think there are many reasons why MS added NotImplementedException to the framework:

";;;;;; ;;;;;;
    ;;;;;;
  • As a convenience; since many developers will need it during development, why should everybody have to roll their own?
  • ;;;;;
  • So that tools can rely on its presence;" for example, Visual Studio's """"Implement Interface"""" command generate method stubs that throw NotImplementedException. If it were not in the framework, this would not be possible, or at least rather awkward (for example, it could generate code that doesn't compile until you add your own NotImplementedException)
  • ";;;;; "
  • To encourage a consistent """"standard practice""""
  • ";;;;;;
;;;;;; ;;;;;;

Frankodwyer thinks of NotImplementedException as a potential time bomb. I would say that any unfinished code is a time bomb, but NotImplementedException is much easier to disarm than the alternatives. For example, you could have your build server scan the source code for all uses of this class and report them as warnings. If you want to really ban it, you could even add a pre-commit hook to your source-control system that prevents check-in of such code.


Sure, if you roll your own NotImplementedException, you can remove it from the final build to make sure that no time bombs are left. But this will only work if you use your own implementation consistently in the entire team, and you must make sure that you don't forget to remove it before you release. Also, you might find that you can't remove it; maybe there are a few acceptable uses, for example in testing code that is not shipped to customers.


There is really no reason to actually catch a NotImplementedException. When hit, it should kill your app, and do so very painfully. The only way to fix it is not by catching it, but changing your source code (either implementing the called method, or changing the calling code).

";;;;;; " 410719"",""Test"",""

Most developers at Microsoft are familiar with design patterns in which a NotImplementedException is appropriate. It's fairly common actually.

";;;;;; ;;;;;; "

A good example is a Composite Pattern, where many objects can be treated as a single instance of an object. A component is used as a base abstract class for (properly) inherited leaf classes. For example, a File and Directory class may inherit from the same abstract base class, because they are very similar types. This way, they can be treated as a single object (which makes sense when you think about what files and directories are - in Unix for example, everything is a file).

";;;;;; ;;;;;;

So in this example, there would be a GetFiles() method for the Directory class; however, the File class would not implement this method, because it doesn't make sense to do so. Instead, you get a NotImplementedException, because a File does not have children the way a Directory does.

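A minimal sketch of the arrangement described above; the class names and members are illustrative, and the leaf throws NotImplementedException as the answer suggests:

using System;
using System.Collections.Generic;

// Component: files and directories share one abstraction.
public abstract class FileSystemItem
{
    public string Name;
    protected FileSystemItem(string name) { Name = name; }
    public abstract IEnumerable<FileSystemItem> GetFiles();
}

// Composite: a directory has children.
public class DirectoryItem : FileSystemItem
{
    private readonly List<FileSystemItem> _children = new List<FileSystemItem>();
    public DirectoryItem(string name) : base(name) { }
    public void Add(FileSystemItem item) { _children.Add(item); }
    public override IEnumerable<FileSystemItem> GetFiles() { return _children; }
}

// Leaf: a file has no children, so the operation is not implemented for it.
public class FileItem : FileSystemItem
{
    public FileItem(string name) : base(name) { }
    public override IEnumerable<FileSystemItem> GetFiles()
    {
        throw new NotImplementedException("A file has no children.");
    }
}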

Note that this is not limited to .NET - you'll come across this pattern in many OO languages and platforms.

Re NotImplementedException - this serves a few uses; it provides a single exception that (for example) your unit tests can lock onto for incomplete work. But also, it really does do what it says: this simply isn't there (yet). For example, "mono" throws this all over the place for methods that exist in the MS libs but haven't been written yet.

";;;;; ;;;;;; "

Re NotSupportedException - not everything is available. For example, many interfaces support a pair """"can you do this?"""" / """"do this"""". If the """"can you do this?"""" returns false, it is perfectly reasonable for the """"do this"""" to throw NotSupportedException. Examples might be IBindingList.SupportsSearching / IBindingList.Find() etc.

";;;;;; " 410719"",""Test"",""

The main use for a NotImplementedException exception is in generated stub code: that way you don't forget to implement it!! For example, Visual Studio will explicitly implement an interface's methods/properties with the body throwing a NotImplementedException.

";;;;;; " 410719"",""Test"",""
";;;;;;

Why does the NotImplementedException;;;;;; exist?

NotImplementedException is a great way to say that something is not ready yet. Why it's not ready is a separate question for the method's authors. In production code you're unlikely to catch this exception, but if you do, you can immediately see what happened, and it's much better than trying to figure out why a method was called but nothing happened - or, even worse, receiving some "temporary" result and getting "funny" side effects.

";;;;;; ;;;;;;
;;;;;;

Is NotImplementedException the C#;;;;;; equivalent of Java's;;;;;; UnsupportedOperationException?

No, .NET has NotSupportedException for that.

"I have to run my algorithm, catch NotImplementedExceptions and then somehow roll back my application to some sane state."

A good API has XML method documentation that describes the possible exceptions.

"I'm very suspicious of the NotSupportedException also... Not supported? What the? If it's not supported, why is it part of your interface?"

There can be millions of reasons. For example, you can introduce a new version of an API and not want to (or not be able to) support old methods. Again, it is much better to see a descriptive exception than to dig into documentation or debug 3rd-party code.


If you don't want to use it then just ignore it. If you have a block of code whose success depends on every piece of it succeeding, but it might fail in between, then your only option is to catch the base Exception and roll back what needs to be rolled back. Forget NotImplementedException. There could be tons of exceptions thrown, like MyRandomException and GtfoException and OmgLolException. After you originally write the code, I could come by and throw ANOTHER exception type from the API you're calling - one that didn't exist when you wrote your code. Handle the ones you know how to handle and roll back for any others, i.e., catch (Exception). It's pretty simple, I think... I find it comes in handy too, especially when you're trying fancy things with the language/framework that occasionally force things upon you.

";;;;;; ;;;;;; "

One example I have is serialization. I have added properties in my .NET library that don't exist in the database (for example, convenient wrappers over existing "dumb" properties, like a FullName property that combines FirstName, MiddleName, and LastName). Then I want to serialize these data types to XML to send them over the wire (for example, from an ASP.NET application to JavaScript), but the serialization framework only serializes public properties with both get and set accessors. I don't want you to be able to set FullName because then I'd have to parse it out, and there might be some unforeseen format that I wrongly parse and data integrity goes out the window. It's easier to just use the underlying properties for that, but since the language and API require me to have a set accessor, I'll throw a NotImplementedException (I wasn't aware of NotSupportedException until I read this thread, but either one works) so that if some programmer down the road does try to set FullName he'll encounter the exception during testing and realize his mistake.
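
A rough sketch of that kind of property (the Person type is illustrative, and NotSupportedException is used here since the answer notes either one works):

using System;

public class Person
{
    public string FirstName  { get; set; }
    public string MiddleName { get; set; }
    public string LastName   { get; set; }

    // Convenience wrapper over the "dumb" properties. A set accessor exists
    // only to satisfy the serialization framework; actually using it is an error.
    public string FullName
    {
        get { return string.Join(" ", new[] { FirstName, MiddleName, LastName }); }
        set { throw new NotSupportedException("Set FirstName/MiddleName/LastName instead."); }
    }
}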

";;;;;; " 410719"",""Test"",""<.net>""";;;;;; "410719,""Test"",""Why does NotImplementedException exist?""";;;;;; "738067,""Test"",""

If you are using agile, the idea is to always be doing incremental refactoring and never build up large technical debt. That being said, if you have an agile team that is taking over software that has a decent amount of technical debt, you have to fit it in somewhere.

";;;;;; ;;;;;;

Do you go and create developer user stories . .for example .

  • As a developer, I have 50% test coverage over the business logic module, so I have confidence in delivery.
  • As a developer, the application supports dependency injection, so we can swap out concretions and be more agile in the future.

Or is there another best practice for cleaning up this technical debt in the code?


I think it's a good idea to ask how much longer the customer(s) expect to be using the application. If the application's lifespan is limited (say, three years or less) then it may not make sense to put much effort into refactoring. If the lifespan is expected (or hoped) to be longer, then the payback for refactoring becomes that much more attractive.

";;;;;; ;;;;;;

You might also want to try creating a business case for the investment in refactoring. Show specific examples of the kinds of improvements that you would want to make. Make an honest assessment of the costs, risks, and expected payback. Try to find a specific refactoring that you could implement independently of the others, and lobby for approval to make that change as a test run of the refactoring process.


Note that, when you talk about payback, you may be expected to provide specific numbers. It's not enough to say "it will be much easier to fix bugs." Instead, you should be prepared to say something like "We'll see a minimum 30% improvement in turnaround time for bug fixes", or "We will experience 40% fewer regressions." You should also be prepared to negotiate with management and/or customers so that you all agree that you have measurements that are meaningful to them, and to provide measurements from before and after the refactoring.

";;;;;; " 738067"",""Test"",""

Reducing technical debt is something everyone should do, each time we submit code.

";;;;;; ;;;;;;

When you edit code, you tidy up a bit, like scouts before leaving a camping ground.


This way, code that changes often will be in better shape, which is good for business.

Code that never changes won't improve, but then again, why should it, if it works?


Don't schedule tasks for this, although a long-term plan is helpful, as is a forum to discuss issues.


Very large projects would benefit from some kind of locking scheme so that two coders don't refactor the same piece of code simultaneously without synchronizing.


/Roger

I work in an Agile environment, but one where the current codebase existed for several years before the agile techniques were adopted. This means having to work in an agile way around code that was not written with automatic regression testing in mind.

";;;;;; ;;;;;;

Because the technical debt affects how quickly we can deliver new features, we record how much time was added due to working with the legacy code. This data allows us to make a case for time dedicated to paying off technical debt. So when the customer (be it manager, or CTO or whoever) thinks that estimates are too high you have data which can reinforce your position.

Of course, occasionally you find your estimates go over because of unexpected quirks of the legacy code where you had to pay off technical debt. We have found that as long as the extra time can be explained and accounted for, and a case can be made for the benefits of the extra time spent, it's generally accepted pretty well.


Of course, YMMV dependent on customer or other factors, but having statistics which represent the effect of technical debt going forward is very useful.


There should be a distinction between an engineering practice and technical debt. I view test driven development and automated testing as practices.

";;;;;; ;;;;;;

Having taken code assets that were built by waterfall teams, the assets did not have automated unit, functional or performance tests. When we assumed responsibility for the software asset, we trained the product owner in Agile and told them of the practices we would use.

Once we began using the practices, we began to identify technical debt. As technical debt was identified, technical story cards were written and placed on the product backlog by the product owner. The developers and testers estimated all work using the XP engineering practices (TDD, automated testing, pair programming etc.). Those practices identified fragility in the code via TDD and automated functional and performance tests. In particular, a significant performance issue was identified via automated performance testing and profiling. The debt was so large that we estimated the fix to take 6 iterations. We informed the product owner that if new features were developed, they would not be able to be used by the user base given the poor performance of the application. Given that we had to scale the app from a few hundred users to tens of thousands of users, the product owner prioritized the performance technical debt very high and we completed the technical cards in the iterations estimated.

Note: technical debt that can be fixed via refactoring within the estimate of a story card does not require a technical story card. Larger technical debt will. For technical debt that requires a technical card, identify the business impact and ask the product owner to prioritize the card. Then work the card. Don't create technical debt cards for engineering practices. Do all estimating knowing that the engineering practices will be part of the estimate. Do not create a card to retrofit the application with automated unit, functional and performance tests. Instead, include the work only in the cards you are estimating and add the automated tests to the code you touch via the cards being worked. This will enable the app to improve over time without bringing progress to a halt. Stopping the addition of all business cards should only be saved for the most drastic situations, such as the inability of the application to perform or scale.

Given the case where you inherit a code base without automated unit, functional and performance tests, inform the business partner of the sad state of affairs. Let them know how you will estimate the work. Create technical debt cards as the debt is uncovered via the engineering practices. Finally, inform the product owner that the team's velocity will improve as more and more of the code base is touched with automated unit, functional and performance tests.


Is your application internal or do you have an external customer? If a client is paying for your work on and support of the application, it may be difficult to get them to sign off on cards like the ones you suggest.

";;;;;; ;;;;;; "

Also, with your second card idea, it might be hard to say what "Done" is.

";;;;;; ;;;;;;

A specific approach to your issue could be Defect Driven Testing - the idea is that when you get a bug report and estimate the card that says to fix it, see what test(s) you can add in at the same time that are similar but increase coverage.

And you don't specifically ask for technical details about how to get your project under test, but this book is very helpful once you start actually doing it: Working Effectively with Legacy Code.

";;;;; " 738067"",""Test"",""""";;;;;; "738067,""Test"",""Paying off technical debt in Agile""";;;;;; "798243,""Test"",""

Are there objective metrics for measuring code refactoring?

";;;;;; ;;;;;;

Would running findbugs, CRAP or checkstyle before and after a refactoring be a useful way of checking whether the code was actually improved rather than just changed?

I'm looking for metrics that can be determined and tested for, to help improve the code review process.


I see the question from the smell point of view. Smells could be treated as indicators of quality problems and hence, the volume of identified smell instances could reveal the software code quality.

";;;;;; ;;;;;;

Smells can be classified based on their granularity and their potential impact. For instance, there could be implementation smells, design smells, and architectural smells. You need to identify smells at all granularity levels before and after to show the gain from a refactoring exercise. In fact, refactoring could be guided by identified smells.


Examples:

  • Implementation smells: Long method, Complex conditional, Missing default case, Complex method, Long statement, and Magic numbers.
  • Design smells: Multifaceted abstraction, Missing abstraction, Deficient encapsulation, Unexploited encapsulation, Hub-like modularization, Cyclically-dependent modularization, Wide hierarchy, and Broken hierarchy. More information about design smells can be found in this book.
  • Architecture smells: Missing layer, Cyclical dependency in packages, Violated layer, Ambiguous Interfaces, and Scattered Parasitic Functionality. Find more information about architecture smells here.
There are two outcomes you want from refactoring. You want the team to maintain a sustainable pace and you want zero defects in production.

";;;;;; ;;;;;;

Refactoring takes place on the code and the unit test build during Test Driven Development (TDD). Refactoring can be small and completed on a piece of code necessary to finish a story card. Or, refactoring can be large and required a technical story card to address technical debt. The story card can be placed on the product backlog and prioritized with the business partner.

Furthermore, as you write unit tests while doing TDD, you will continue to refactor the tests as the code is developed.

Remember, in agile, the management practices as defined in Scrum will provide you with collaboration and ensure you understand the needs of the business partner and that the code you have developed meets the business need. However, without proper engineering practices (as defined by Extreme Programming) your project will lose sustainable pace. Many agile projects that did not employ engineering practices were in need of rescue. On the other hand, teams that were disciplined and employed both management and engineering agile practices were able to sustain delivery indefinitely.

So, if your code is released with many defects or your team loses velocity, then refactoring and the other engineering practices (TDD, pairing, automated testing, simple evolutionary design, etc.) are not being properly employed.


No matter what you do just make sure this metric thing is not used for evaluating programmer performance, deciding promotion or anything like that.

";;;;;; " 798243"",""Test"",""

I would stay away from metrics for measuring refactoring success (aside from #unit test failures == 0). Instead, I'd go with code reviews.

";;;;;; ;;;;;; "

It doesn't take much work to find obvious targets for refactoring: """"Haven't I seen that exact same code before?"""" For the rest, you should create certain guidelines around what not to do, and make sure your developers know about them. Then they'll be able to find places where the other developer didn't follow the standards.

";;;;;; ;;;;;;

For higher-level refactorings, the more senior developers and architects will need to look at code in terms of where they see the code base moving. For instance, it may be perfectly reasonable for the code to have a static structure today; but if they know or suspect that a more dynamic structure will be required, they may suggest using a factory method instead of using new, or extracting an interface from a class because they know there will be another implementation in the next release.


None of these things would benefit from metrics.


Yes, several measures of code quality can tell you if a refactoring improves the quality of your code.

";;;;;; ;;;;;;
    ;;;;;;
  • Duplication. In general, less duplication is better. However, duplication finders that I've used sometimes identify duplicated blocks that are merely structurally similar but have nothing to do with one another semantically and so should not be deduplicated. Be prepared to suppress or ignore those false positives.

  • Code coverage. This is by far my favorite metric in general, but it's only indirectly related to refactoring. You can and should raise low coverage by writing more tests, but that's not refactoring. However, you should monitor code coverage while refactoring (as with any other change to the code) to be sure it doesn't go down. Refactoring can improve code coverage by removing untested copies of duplicated code.

  • Size metrics such as lines of code, total and per class, method, function, etc. A Jeff Atwood post lists a few more. If a refactoring reduces lines of code while maintaining clarity, quality has increased. Unusually long classes, methods, etc. are likely to be good targets for refactoring. Be prepared to use judgement in deciding when a class, method, etc. really does need to be longer than usual to get its job done.

  • Complexity metrics such as cyclomatic complexity. Refactoring should try to decrease complexity and not increase it without a well thought out reason. Methods/functions with high complexity are good refactoring targets.

  • Robert C. Martin's package-design metrics: Abstractness, Instability and Distance from the abstractness-instability main sequence. He described them in his article on Stability in C++ Report and his book Agile Software Development, Principles, Patterns, and Practices. JDepend is one tool that measures them. Refactoring that improves package design should minimize D (see the sketch after this list).
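
As a rough illustration of those package metrics (my reading of Martin's definitions, so treat the formulas as an assumption rather than a quote): Instability I = Ce / (Ca + Ce), Abstractness A = abstract types / total types, and Distance D = |A + I - 1|.

using System;

static class PackageMetrics
{
    // D = |A + I - 1|: 0 means the package sits on the "main sequence".
    public static double Distance(int abstractTypes, int totalTypes,
                                  int afferentCouplings, int efferentCouplings)
    {
        double a = (double)abstractTypes / totalTypes;
        double i = (double)efferentCouplings / (afferentCouplings + efferentCouplings);
        return Math.Abs(a + i - 1.0);
    }

    static void Main()
    {
        // Example: 2 abstract types out of 10, Ca = 3, Ce = 6  =>  D is about 0.13
        Console.WriteLine(Distance(2, 10, 3, 6));
    }
}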

I have used and continue to use all of these to monitor the quality of my software projects.


Code size. Anything that reduces it without breaking functionality is an improvement in my book (removing comments and shortening identifiers would not count, of course).

";;;;;; " 798243"",""Test"",""
";;;;;;

Would running findbugs, CRAP or checkstyle before and after a refactoring be a useful way of checking if the code was actually improved rather than just changed?

Actually, as I have detailed in the question "What is the fascination with code metrics?", the trend of any metric (findbugs, CRAP, whatever) is the true added value of metrics.
It (the evolution of metrics) allows you to prioritize the main fixing actions you really need to make to your code (as opposed to blindly trying to respect every metric out there).

;;;;;; ;;;;;; "

A tool like Sonar can be very useful in this domain (the monitoring of metrics).

";;;;;; ;;;;;;
;;;;;; ;;;;;;

Sal adds in the comments:

;;;;;; ;;;;;;
;;;;;;

The real issue is on checking what code changes add value rather than just adding change

;;;;;;
;;;;;; ;;;;;; "

For that, test coverage is very important, because only tests (unit tests, but also larger """"functional tests"""") will give you a valid answer.
";;;;;; "But refactoring should not be done without a clear objective anyway. To do it only because it would be """"more elegant"""" or even """"easier to maintain"""" may be not in itself a good reason enough to change the code.
";;;;;; "There should be other measures like some bugs which will be fixed in the process, or some new functions which will be implemented much faster as a result of the """"refactored"""" code.
";;;;;; In short, the added value of a refactoring is not solely measured with metrics, but should also be evaluated against objectives and/or milestones.

;;;;;; " 798243"",""Test"",""

Depending on your specific goals, metrics like cyclomatic complexity can provide an indicator of success. In the end, though, every metric can be subverted, since no metric can capture intelligence and/or common sense.

";;;;;; ;;;;;;

A healthy code review process might do wonders though.

;;;;;; " 798243"",""Test"",""

Number of failed unit tests must be less than or equal to zero :)

";;;;;; " 798243"",""Test"",""""";;;;;; "798243,""Test"",""Metrics for measuring successful refactoring""";;;;;; "903573,""Test"",""

What I mean by this is that sometimes architects look to simplify and improve testability at the expense of other important forces.

";;;;;; ;;;;;;

For example, I'm reviewing a very complicated application, made so by extensive use of design patterns that overly favor testing, e.g. IoC, DI, AOP, etc...
;;;;;; Now, typically I like these things, but this system should have been much simpler - though not just a simple web frontend for CRUD on a db, it's still not MUCH more complicated than that (even considering some internal workflows, processes, etc). On the other hand, just reviewing the code becomes a major pain in the heinie, barely readable (even though it's well written), and coding it must have been a pain.

;;;;;; ;;;;;; "

The implemented complexity is a clear violation of KISS (the principle, NOT the band)... and the """"only"""" benefit is improved testability, using testing frameworks and mocks and...

";;;;;; ;;;;;;

Now, before you TDD fans jump me, I'm not belittling the importance of testability; I'm questioning whether this one force should trump all the others.
;;;;;; Or did I miss something?

;;;;;; ;;;;;;
;;;;;; ;;;;;; "

I'd like to add another point - it does seem to me that all this talk of """"testability"""" is specifically about unit testing, which differs from overall system testing, and can result in missed tests when the individual units are integrated. At least, that seems to be the point of the IoC/DI for testing...
";;;;;; Also, I'd point out that this system (and others I've seen preached) only has a single concrete object per interface, and the IoC/DI is only intended for - you guessed it - replacing the concrete objects with testing mockups.

;;;;;; ;;;;;;
;;;;;; ;;;;;; "

I felt the need to add this quote from Wikipedia on IoC:

";;;;;; ;;;;;;
;;;;;;

Whereas the danger in procedural programming was to end with spaghetti code, the danger when using Inversion of Control is ending with macaroni code

;;;;;;
;;;;;; ;;;;;;

Yup, that expresses my feeling exactly :D

;;;;;; " 903573"",""Test"",""

I have no idea what you mean by it being barely readable, as, even when using AOP and DI each part should be easy to understand. Understanding the whole may be more complicated, due to these technologies, but that is more a matter of being able to explain, either with models or text, how the application works.

";;;;;; ;;;;;;

I am currently working on an application where there is not a single unit test, so now I am starting to introduce DI to help make testing simpler. But it will make it harder for the other developers to understand the system, since different concrete classes can be plugged in, and you won't know which one is used until you look at the app.config file.

;;;;;; ;;;;;;

This could lead to them thinking the code is unreadable because they can't just flow from one function level to another easily, but have to make a side trip to see which concrete class to use.
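To make the trade-off concrete, here is a minimal constructor-injection sketch (hypothetical names, and in Java rather than the .NET/app.config setup described above; the idea is the same): the production wiring decides which concrete class is used, while a test can hand in a fake without any config file.

    interface PaymentGateway {
        boolean charge(String account, double amount);
    }

    class LivePaymentGateway implements PaymentGateway {
        public boolean charge(String account, double amount) {
            return true; // a real HTTP call would go here
        }
    }

    class OrderService {
        private final PaymentGateway gateway;

        // The service no longer says new LivePaymentGateway(), which is exactly
        // why a reader has to check the wiring/config to know what runs...
        OrderService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        boolean placeOrder(String account, double total) {
            return gateway.charge(account, total);
        }
    }

    class OrderServiceSketchTest {
        void chargesTheAccount() {
            // ...but a test can plug in a fake with one line.
            PaymentGateway fake = (account, amount) -> true;
            assert new OrderService(fake).placeOrder("acct-1", 9.99);
        }
    }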

;;;;;; ;;;;;;

But, in the long run this will be a more flexible and stable system, so I think it is worth the bit of training that will be involved. :)

;;;;;; ;;;;;;

You may just need to see about getting a better system model for the application, to see how everything is tied together.

;;;;;; " 903573"",""Test"",""
";;;;;;

I'm reviewing a very complicated application, made so by extensive use of design patterns that overly favor testing, e.g. IoC, DI, AOP, etc...

;;;;;;
;;;;;; ;;;;;; "

In this case testing is not the problem; it's the design patterns and the overall architecture that are at fault, something commonly criticised by Joel and Jeff in discussions against Architecture Astronauts. Here, we have something that has been decided based on 'wow, cool architecture': if 1 design pattern is good, 2 must be great and 3 must be fantastic - let's see how many patterns we can create this app out of.

";;;;;; ;;;;;;

Testing may well be essential to make those patterns work reliably (hmm, that says something about them really), but you shouldn't confuse testing being good with some architectural designs being poor.

;;;;;; ;;;;;;

So, no, feel free to focus on testing without worry. For example, Extreme Programming is a very simple development methodology that focuses on testing; if you'd written your app in such a freeform way you might not have gotten into this mess. The mess you have is not the fault of test-driven development, but of the design choices that were made.

;;;;;; ;;;;;;

If you can start scrapping it, do so. Maintainability is the most important factor in software; if it isn't easy to modify, you could seal it and start over, as it will probably cost you more to maintain it.

;;;;;; " 903573"",""Test"",""

The benefit of this approach will come back IF the app grows large enough. Otherwise it's just a waste of time. Sometimes even drag-and-drop 'coding' and following the SmartUI pattern is satisfying enough.

;;;;; " 903573"",""Test"",""

For better or worse TDD has helped me break down my applications into more manageable components where my ability to test items in isolation has forced me to keep things concise. The tests have also served as a good source of documentation when I introduce others to my code. Going through the tests can be a good way to review the workings of an application where things are isolated sufficiently that you can wrap your head around the functional parts. Another nice by-product is that when you have employed a design pattern in an application, the tests have a similarity to other applications where you have used that pattern.

";;;;;; ;;;;;;

All that said, it would be really silly to implement, let's say, the Command pattern and only have two commands when you know that the app will only ever execute two functions. Now you have saddled yourself with writing a bunch of tests. What was gained? You can always test public methods, but with a pattern in place you have extra complexity to deal with, and you have incurred technical debt with all the additional tests you have to maintain.
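A hypothetical side-by-side, not from the original answer, of what that overhead looks like in Java: the pattern version adds an interface and a class per operation (each wanting its own tests), where two plain methods would have done.

    import java.util.List;

    // With the Command pattern: an interface plus one class per operation.
    interface Command {
        void execute();
    }

    class StartBackupCommand implements Command {
        public void execute() { System.out.println("backing up"); }
    }

    class StopBackupCommand implements Command {
        public void execute() { System.out.println("stopping"); }
    }

    class Invoker {
        void run(List<Command> commands) {
            commands.forEach(Command::execute);
        }
    }

    // Without the pattern: two public methods, two straightforward tests.
    class Backup {
        void start() { System.out.println("backing up"); }
        void stop()  { System.out.println("stopping"); }
    }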

;;;;;; ;;;;;;

Another factor to take into consideration is what level architecture your team can support. Are all team members at the same level of understanding of TDD, or will there be a minority of people who can understand the tests? Will seeing a mock object make someone's eyes glaze over, and does that alone become the prohibitive factor for not completing maintenance in a timely manner?

;;;;;; ;;;;;; "

In the end, the scope of application needs to drive the design as well. Complexity for the sake of being """"pure"""" is not good judgment. TDD does not cause this; rather, a lack of experience can.

;;;;; " 903573"",""Test"",""

From the description it sounds like the project lost track of YAGNI, developing large structures so that testing could be done if needed.

";;;;;; ;;;;;;

In TDD everything is justified by a test, so the fact that you have all of this IoC, DI, AOP was either required as the simplest solution to make the existing tests pass or (much more likely) is an over-engineered solution to keeping the code testable.

;;;;;; ;;;;;;

One mistake I have seen that leads to this kind of complexity is the desire to have the testing follow the design, rather than the other way around. What can happen is that the desire to keep to a certain hard-to-test design leads to the introduction of all kinds of workarounds to open the API rather than developing a simpler, easier to test API.

;;;;;; " 903573"",""Test"",""

(This is written entirely from a programmer's perspective. For a more customer-facing answer, I'd recommend Michael Bolton's reply.)

";;;;;; ;;;;;;

If the app you are writing is <10 lines of code, then yes, adding tests increases the complexity massively. You can LOOK AT IT and test it manually and you'll probably be fine. At 100 lines, not so much; 1,000 lines, not so much; 10,000 lines, 100,000 lines... etc.

;;;;; ;;;;;;

A second axis is change. Will this code base /ever/ change? By how much? The more the code will change, the more valuable tests will be.

;;;;;; ;;;;;;

So, yes, for a 150-line-of-code app that is an edi-format-to-edi-format conversion script that runs in batch mode and is never going to change, heavy unit testing might be overkill.

;;;;;; ;;;;;;

Generally, for large apps, I've found that changing the code to be testable improves the quality of the design and the API. So if you are writing something much larger or that will be developed iteratively and think (automated) unit testing has high cost/low value, I'd take a serious look at why you believe that to be the case.

;;;;;; ;;;;;;

One explanation is that your boss has pattern addiction. Another might be that you see patterns and testing as a yes/no, all-or-nothing discussion. A third is that the code is already written and it's the rewrite to-be-testable that you are dreading. If any of those are the case, I would suggest a surgical approach - focus on a few high bang-for-the-buck tests that add value very quickly. Grow your test suite slowly, as the code progresses. Refactor to patterns when you see value and simplicity - not complexity.

;;;;;; " 903573"",""Test"",""

""""Or did I miss something?""""

";;;;;; ;;;;;; "

There's an implied direct relationship in the question between how testable code is and how complex code is. If that's been your experience I'd say you're doing it wrong.

";;;;;; ;;;;;;

Code doesn't have to be more complicated to be more testable. Refactoring code to be more testable does tend towards code being more flexible and in smaller pieces. This doesn't necessarily mean more complex (which is already a loaded term) or that there needs to be action-at-a-distance.

;;;;;; ;;;;;;

Not knowing the details, I can only give generic advice. Check that you're not just using the pattern-of-the-week. If you have a method which requires a lot of setup or complicated ways to override its behavior, often there's a series of simpler, deterministic methods inside. Extract those methods and then you can more easily unit test them.
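A minimal sketch of that extraction, with hypothetical names: the deterministic arithmetic is pulled out of a method that also does file I/O, so it can be unit tested with an in-memory list and no setup.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    class InvoiceReport {
        // Before, the parsing and summing were buried in here, so every test
        // needed a real file on disk.
        double totalFromFile(Path csv) throws IOException {
            return totalOf(Files.readAllLines(csv));
        }

        // After extraction, the deterministic core is trivial to test:
        // totalOf(List.of("1.5", "2.5")) == 4.0
        static double totalOf(List<String> lines) {
            return lines.stream()
                    .mapToDouble(Double::parseDouble)
                    .sum();
        }
    }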

;;;;;; ;;;;;;

Tests don't have to be as clean and well designed as the code they're testing. Often it's better to do what would normally be a nasty hack in a test rather than do a whole lot of redesign on the code. This is particularly nice for failure testing. Need to simulate a database connection failure? Briefly replace the connect() method with one that always fails. Need to know what happens when the disk fills up? Replace the file open method with one that fails. Some languages support this technique well (Ruby, Perl), others not so much. What is normally horrible style becomes a powerful testing technique which is transparent to your production code.
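Java doesn't allow the kind of on-the-fly method replacement the answer has in mind for Ruby or Perl, but a rough equivalent (hypothetical names, a sketch only) is to override the risky method in a throwaway subclass inside the test:

    import java.io.IOException;

    class ReportUploader {
        protected void connect() throws IOException {
            // real network connection in production
        }

        String upload(String report) {
            try {
                connect();
                return "sent:" + report;
            } catch (IOException e) {
                return "queued:" + report; // expected fallback on failure
            }
        }
    }

    class ReportUploaderFailureSketch {
        void queuesWhenConnectionFails() {
            // Replace connect() with one that always fails; production code untouched.
            ReportUploader broken = new ReportUploader() {
                @Override protected void connect() throws IOException {
                    throw new IOException("simulated outage");
                }
            };
            assert broken.upload("q3").equals("queued:q3");
        }
    }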

;;;;;; ;;;;;;

One thing I will definitively say is to never put code in production which is only useful for testing. Anything like if( TESTING ) { .... } is right out. It just clutters up the code.
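A common way to avoid the if( TESTING ) branch (again a hypothetical sketch, not something the answer prescribes) is to inject the thing you would otherwise switch on, so production code never knows a test exists:

    import java.time.Clock;
    import java.time.Instant;
    import java.time.ZoneOffset;

    // Instead of "if (TESTING) return FIXED_TIME", the clock is a constructor
    // argument: production passes Clock.systemUTC(), tests pass a fixed clock.
    class TokenIssuer {
        private final Clock clock;

        TokenIssuer(Clock clock) {
            this.clock = clock;
        }

        String issue(String user) {
            return user + "@" + Instant.now(clock);
        }
    }

    class TokenIssuerSketchTest {
        void usesTheInjectedTime() {
            Clock fixed = Clock.fixed(Instant.parse("2009-06-01T00:00:00Z"),
                                      ZoneOffset.UTC);
            assert new TokenIssuer(fixed).issue("sal")
                    .equals("sal@2009-06-01T00:00:00Z");
        }
    }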

;;;;;; " 903573"",""Test"",""

A testable product is one that affords the opportunity to answer questions about it. Testability, like quality, is multidimensional and subjective. When we evaluate a product as (not) testable, it's important to recognize that testability to some person may be added or unnecessary complexity to someone else.

";;;;;; ;;;;;;

A product that has lots of unit tests may be wonderfully testable to the programmers, but if there are no hooks for automation, the product may be hard to test for a testing toolsmith. Yet the very same product, if it has a clean workflow, an elegant user interface, and logging, may be wonderfully testable by an interactive black-box tester. A product with no unit tests whatsoever may be so cleanly and clearly written that it's highly amenable to inspection and review, which is another form of testing.

;;;;;; ;;;;;; "

I talk about testability here. James Bach talks about it here.

";;;;;; ;;;;;;

---Michael B.

;;;;;; " 903573"",""Test"",""

In my view, given a sufficiently large or important piece of software, adding some complexity to improve testability is worth it. Also, in my experience, the places where the complexity is difficult to understand are where abstraction layers are added to wrap around a piece of code that is inherently untestable on its own (like sealed framework classes). When code is written from the perspective of testability as a first principle, I've found that the code is, in fact, easy to read and no more complex than necessary.

";;;;;; ;;;;;; "

I'm actually pretty resistant to adding complexity where I can avoid it. I've yet to move to a DI/IoC framework, for example, preferring to hand-inject dependencies only where needed for testing. On the other hand, where I have finally adopted a practice that """"increases"""" complexity -- like mocking frameworks -- I've found that the amount of complexity is actually less than I feared and the benefit more than I imagined. Perhaps, I'll eventually find this to be true for DI/IoC frameworks as well, but I probably won't go there until I have a small enough project to experiment on without delaying it unreasonably by learning new stuff.

";;;;;; " 903573"",""Test"",""

I've seen first-hand web sites that passed all unit tests, passed all automated interface tests, passed load tests, passed just about every test, but clearly and obviously had issues when viewed by a human.

";;;;;; ;;;;;; "

That led to code analysis, which discovered memory leaks, caching issues, bad code, and design flaws. How did this happen when more than one testing methodology was followed and all tests passed? None of the """"units"""" had memory leaks or caching issues, only the system as a whole.

";;;;;; ;;;;;; "

Personally I believe it's because everything was written and designed to pass tests, not to be elegant, simple and flexible in design. There is a lot of value in testing. But just because code passes a test, doesn't mean it's good code. It means it's """"book smart"""" code, not """"street smart"""" code.

";;;;;; " 903573"",""Test"",""

""""did I miss something?""""

";;;;;; ;;;;;;

Yes.

;;;;;; ;;;;;;

The thing works, does it not?

;;;;;; ;;;;;;

And, more importantly, you can demonstrate that it works.

;;;;;; ;;;;;;

The relative degree of complexity added for testability isn't very interesting when compared with the fact that it actually works and you can demonstrate that it actually works. Further, you can make changes and demonstrate that you didn't break it.

;;;;;; ;;;;;;

The alternatives (may or may not work, no possibility of demonstrating whether it works, can't make a change without breaking it) reduce the value of the software to zero.

;;;;;; ;;;;;;
;;;;;; ;;;;;;

Edit

;;;;;; ;;;;;; "

""""Complexity"""" is a slippery concept. There are objective measures of complexity. What's more important is the value created by an increase in complexity. Increasing complexity gives you testability, configurability, late binding, flexibility, and adaptability.

";;;;;; ;;;;;;

Also, the objective measures of complexity are usually focused on coding within a method, not the larger complexity of the relationships among classes and objects. Complexity seems objective, but it isn't defined at all layers of the software architecture.

;;;;;; ;;;;;; "

""""Testability"""" is also slippery. There may be objective measures of testability. Mostly, however, these devolve to test coverage. And test coverage isn't a very meaningful metric. How does the possibility of a production crash vary with test coverage? It doesn't.

";;;;;; ;;;;;;

You can blame complexity on a focus on testability. You can blame complexity on a lot of things. If you look closely at highly testable code, you'll find that it's also highly flexible, configurable and adaptable.

;;;;;; ;;;;;; "

Singling out """"testability"""" as the root cause of """"complexity"""" misses the point.

";;;;;; ;;;;;; "

The point is that there are numerous interrelated quality factors. """"It Works"""" is a way of summarizing the most important ones. Other, less important ones, include adaptability, flexibility, maintainability. These additional factors usually correlate with testability, and they can also be described negatively as """"complexity"""".

";;;;;; " 903573"",""Test"",""

To answer your general question, I'd say """"everything in moderation"""". An emphasis on testability is of course a great thing. But not when it comes at the cost of excluding, say, readable code or a logical API.

";;;;;; " 903573"",""Test"",""

TDD done well can improve readability. TDD done poorly, that is without consideration of other important principles, can reduce readability.

";;;;;; ;;;;;; "

A guy I worked with in the mid-90s would say """"You can always make a system more flexible by adding a layer of indirection. You can always make a system simpler by removing a layer of indirection."""" Both flexibility and simplicity are important qualities of a system. The two principles can often live together in harmony, but often they work against each other. If you go too far towards one extreme or the other, you move away from the ideal that exists where these two principles are balanced.

";;;;;; ;;;;;;

TDD is partly about testing, partly about design. TDD done poorly can tend too much towards either flexibility or simplicity. It can push towards too much flexibility. The objects become more testable, and often simpler, but the inherent complexity of the domain problem then is pushed out of the objects into the interaction of the objects. We gained flexibility, and to the naive eye, it can look as though we've gained simplicity because our objects are simpler. The complexity, however, is still there. It's moved out of the objects, and into the object interaction, where it's harder to control. There are code smells that can act as red flags here - a system with hundreds of small objects and no larger objects is one, lots of objects with only one-line methods is another.

;;;;;; ;;;;;;

TDD done poorly can move in the other direction as well, that is, towards too much simplicity. So, we do TDD by writing the test first, but it has little impact on our design. We still have long methods and huge objects, and those are code smells that can red-flag this problem.

;;;;;; ;;;;;;

Now TDD will not by its nature knock you off-balance in either direction, provided it's well-applied. Use other practices to keep you on track. For example, draw pictures of what you're doing before you do it. Obviously, not all the time. Some things are far too simple for that. Some pictures are worth saving, some are just sketches that help us to visualize the problem, and we are, by varying degrees, mostly visual learners. If you can't draw a picture of the problem, you don't understand it.

;;;;;; ;;;;;;

How will this help with TDD? It will help to keep a system from going too far on the flexibility side, away from the simplicity side. If you draw a picture and it's ugly, that's a red flag. Sometimes it's necessary, but often when you draw the picture, your mind will quickly see things that can be simplified. The solution becomes more elegant and simplified, easier to maintain, and more enjoyable to work on. If you can't or won't draw pictures of your system, you're losing this opportunity to make your software more solid, more elegant, more beautiful to see and easier to maintain.

;;;;;; ;;;;;;

Applying this comes with experience, and some coders will never understand the value that a good balance provides. There's no metric that you can run that tells you you're in the right place. If someone gives you a prescribed method to arrive at that harmonious point, he's lying to you. More importantly, he's probably lying to himself without realizing it.

;;;;;; ;;;;;;

So, my answer to your question is 'yes': test everything without forgetting the other good principles.

;;;;;; ;;;;;;

Any good practice will throw you off-course if it's not balanced with other good practices.

;;;;;; " 903573"",""Test"",""""";;;;;; "903573,""Test"",""Is too much focus on testing benefits a bad thing overall?""";;;;;; "1433741,""Test"",""

I'm trying to build a plan for how we could spend more time refactoring. I wanted to compare with industry standards, but I have a hard time finding studies or metrics on that.

";;;;;; ;;;;;;

I feel that 20% of dev time spent on refactoring seems a good ratio, but I don't have anything to show for it.

;;;;;; ;;;;;;

In my mind, for 100% of dev time:

;;;;;; ;;;;;;
    ;;;;;;
  • 50% is spent writing code, debugging, etc...
  • ;;;;;;
  • 30% is spent writing unit-tests
  • ;;;;;;
  • 20% is spent refactoring code
  • ;;;;;;
;;;;;; ;;;;;;

So around 1 line of code out of every 2 written ends up in the shipped product. Obviously design time, documentation time, etc. are included in these percentages.

;;;;;; ;;;;;;

What is an industry standard? As a rule of thumb, what is your team using?

Thanks,
Olivier

;;;;;; " 1433741"",""Test"",""

I'd think such a ratio would vary widely depending on the people, the project, the tools, and likely other things. Typically, however, this would be accounted for under debugging and/or testing, as in a new project it would typically be part of dealing with problems discovered later. There would also be some going on in the initial code writing.

";;;;;; " 1433741"",""Test"",""

For good, well designed code, 5% or less sounds about right to me for ongoing development.

";;;;;; ;;;;;;

For problematic code that needs serious redesign, you might have to budget a much higher percentage for up-front refactoring, before adding new features or fixing serious bugs.

;;;;;; " 1433741"",""Test"",""

This depends on many factors, like the type of business you are in, the type of process your company uses, the type of team you are on, what language you are using, etc.

";;;;;; ;;;;;;

There is absolutely no right answer. If you are in a field where the requirements move a lot, the client changes things often, and you use an agile method, you will do more refactoring. If you are in a bank and you have a team that uses a waterfall approach, you will spend more time writing code and less time refactoring.

;;;;;; " 1433741"",""Test"",""

If you have a difficult time understanding your code, then it's time to refactor.

";;;;;; ;;;;;;

Otherwise, you risk wasting time debugging due to misunderstandings of what your code does; in other words, you have incurred too much debt to risk not refactoring.

;;;;; ;;;;;;

Similarly, if you are unsure what a piece of code does, make sure you write a unit test that verifies your understanding.

;;;;;; ;;;;;;

These are just rules of thumb that I follow so that I don't waste too much time in the debugger. So the actual time that I spend refactoring actually varies greatly depending on the complexity of the code and what stage I am in development. Furthermore, if the code is overly complex, that's usually a sign that classes have to be broken into smaller, easier-to-understand and easier-to-maintain pieces.

;;;;;; " 1433741"",""Test"",""

My approach to refactoring is to do it at the same time as fixing things. The benefits of refactoring only ever come when maintaining code, so there is very little benefit to refactoring code which doesn't have many bugs and doesn't require any new features.

";;;;;; ;;;;;;

Whenever I'm fixing a bug I look for ways to refactor then, i.e. refactoring time is included in the writing code / debugging category (which I would argue is two separate categories).

;;;;;; " 1433741"",""Test"",""

What is your design approach? Waterfall? Agile? How big is your project? How big is your team?

";;;;;; ;;;;;;

The most productive I've been while doing Agile development tends towards 33/33/33, or maybe even 30/30/40. Once you've written the test, and then written the code to pass the test, then you can refactor and hone the code, confident that you're not breaking anything.

;;;;;; ;;;;;;

On a small project, you can hypothetically architect your code perfectly and never have to test/refactor (I've never seen this actually work). In a large project, or one with many hands in it, or one with many customers asking for many different things, refactoring and tests are far more important than the code itself.

;;;;;; ;;;;;;

It's like asking, over the lifetime of a house, how many times you should build the house, how many times you should consult the building code, and how many times you should perform maintenance. Obviously building the house is the most important thing, and in theory you can architect a 'perfect' house that will require no renovation down the line, but it's unlikely.

;;;;;; ;;;;;;

You will more likely spend a year or two building the house, and the rest of the house's duration periodically renovating. Reinforcing the load-bearing members is more important than building a deck, even if your clients are asking for a deck. They'll be unhappy, but they'll be even more unhappy if the roof falls in and all they have to live on is a deck.

;;;;;; ;;;;;;

Likewise, you'd spend X amount of time writing the code, but a larger amount of the time refactoring and optimizing it through the lifecycle of the project.

;;;;;; " 1433741"",""Test"",""

I don't really designate a separate amount of time for things like refactoring, unit testing, and documentation. I just consider them to be a part of the finished product, and the job's not done until they are.

";;;;;; " 1433741"",""Test"",""

I doubt there are any norms.

";;;;;; ;;;;;;

To your breakdown: most teams do not write unit tests and do not refactor (until something breaks or stalls development). Most commonly, refactoring time allotment is < 1 %.

;;;;; ;;;;;;

If you're interested in good practice then....

;;;;;; ;;;;;;
    ;;;;;;
  • Refactoring may be an ongoing activity as part of the development process. You see an improvement potential and you personally assign some small time to make things better. Here refactoring time < 5%.

  • ;;;;;
  • You perform regular code reviews. Say, once in a few months. Then you can dedicate a few days exclusively for the team to only review their code and improve it. Here also < 5%.

  • ;;;;;
;;;;;; " 1433741"",""Test"",""

First point: writing code, debugging, and refactoring are, IMO, a single activity that should occur throughout the project's life. A perfect design doesn't really exist, as a design is something ephemeral. Something perfect today can be totally invalidated by new requirements tomorrow.

";;;;;; ;;;;;;

Second point, I've seen many projects where writing unit tests takes more time than writing code to make them pass.

;;;;;; ;;;;;;

So to me, ratios are more like:

;;;;;; ;;;;;;
    ;;;;;;
  • ;;;;;;

    50%: unit tests

    ;;;;;;
  • ;;;;;;
  • the rest: coding/debugging/refactoring/documentation
  • ;;;;;;
;;;;;; " 1433741"",""Test"",""

Your comment says that you have millions of lines of code but no unit tests, and that you are having a hard time convincing management that unit tests are worth it. According to Fowler's book, refactoring needs to be accompanied by unit tests to provide the confidence that you're not breaking anything while you refactor. I would agree, and I'd suggest that unit tests are going to provide more value than anything else at this stage, so aim first for that goal. I strongly recommend Michael Feathers' book """"Working Effectively with Legacy Code"""" for suggestions as to how to do this. You don't even have to write more than a few unit tests to make it a worthwhile effort, just get the framework running.

";;;;;; ;;;;;;

Step 0: get an automated unit testing framework harnessed into your code.
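As a rough illustration of what """"get the framework running"""" can look like (hypothetical class and values, JUnit-style), the first test can simply be a characterization test in Feathers' sense: assert whatever the legacy code does today, so later refactoring has a safety net.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Stand-in for whichever legacy class you can reach first; in real life it
    // already exists and you don't touch it yet.
    class PricingEngine {
        double discountedPrice(double base, String tier) {
            return "GOLD".equals(tier) ? base * 0.9 : base;
        }
    }

    public class PricingEngineCharacterizationTest {
        @Test
        public void recordsCurrentDiscountBehaviour() {
            // The expected value is simply what the code returns today.
            assertEquals(90.0, new PricingEngine().discountedPrice(100.0, "GOLD"), 0.001);
        }
    }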

;;;;;; ;;;;;;

You're not going to try to accomplish this alone, are you? This is a big project, and I expect you are part of a senior technical team who shares the pain with you. You need to get all of them to buy into this 100%. You'll need their backing when you go to your boss, you'll need their expertise to share in creating the design, and you'll need their total agreement on the design.

;;;;;; ;;;;;;

Step 1: gather a posse.

;;;;;; ;;;;;;

Without a plan and a goal, refactoring isn't going to help much. Are you hoping just to chop the code up and make modules smaller? Are you going to get the code organized into domains? Are you going to try to wedge some service interfaces into it? Are you going to refactor to an n-tier architecture? What do you and the posse think needs doing? And how are you going to communicate this design and refactoring plan to the SEs?

;;;;;; ;;;;;;

Step 2: get the posse to do some initial architectural design and planning of the end state.

;;;;;; ;;;;;; "

Now for the hard part. You're asking for 20% of 30 engineers' time, which is probably over $500,000 per year. You're going to need a lot more justification than """"accumulated technical debt."""" You're going to need to show return on investment.

";;;;;; ;;;;;; "

So be ready to answer the question your boss is certain to ask: """"why should I?"""" What are you expecting to gain by refactoring? Will you reduce development effort on new features by 10%? 100%? Will you increase code quality/reduce bugs/reduce support costs? Will you speed up time-to-market? By how much? Will this let you reduce SE or contractor headcount? How many? Or will you be able to add more features per release? There are also negatives: how many features will be delayed if you are given a year to monkey around with refactoring? By how long will they be delayed?

";;;;;; ;;;;;;

Step 3: do some serious estimating.

;;;;;; ;;;;;; "

So now that you're armed with a design, a plan, monetary justification, and you have the backing of the technical staff, go back to your boss and present your case to him or her. You'll have a lot better luck than saying """"we should spend 20% of our time refactoring, some guys on the internet said so.""""

";;;;;; " 1433741"",""Test"",""