""";;;;;;
"2143368,""Test"",""Given a short (2-week) sprint, is it ever acceptable to forgo TDD to """"get things done""""?""";;;;;;
"3476054,""Test"",""I'm strongly considering adding unit testing to an existing project that is in production. It was started 18 months ago before I could really see any benefit of TDD (face palm), so now it's a rather large solution with a number of projects and I haven't the foggiest idea where to start in adding unit tests. What's making me consider this is that occasionally an old bug seems to resurface, or a bug is checked in as fixed without really being fixed. Unit testing would reduce or prevents these issues occuring.
";;;;;;
;;;;;;
"By reading similar questions on SO, I've seen recommendations such as starting at the bug tracker and writing a test case for each bug to prevent regression. However, I'm concerned that I'll end up missing the big picture and end up missing fundamental tests that would have been included if I'd used TDD from the get go.
";;;;;;
;;;;;;
Are there any process/steps that should be adhered to in order to ensure that an existing solutions is properly unit Testd and not just bodged in? How can I ensure that the tests are of a good quality and aren't just a case of any test is better than no tests.
;;;;;;
;;;;;;
So I guess what I'm also asking is;
;;;;;
;;;;;;
;;;;;;
- Is it worth the effort for an;;;;;;
existing solution that's in production?
;;;;;;
- Would it better to ignore the testing;;;;;;
for this project and add it in a;;;;;;
possible future re-write?
;;;;;;
- What will be more benefical; spending;;;;;
a few weeks adding tests or a few;;;;;;
weeks adding functionality?
;;;;;;
;;;;;;
;;;;;;
(Obviously the answer to the third point is entirely dependant on whether you're speaking to management or a developer)
;;;;;;
;;;;;;
;;;;;;
;;;;;;
Reason for Bounty
;;;;;;
;;;;;;
Adding a bounty to try and attract a broader range of answers that not only confirm my existing suspicion that it is a good thing to do, but also some good reasons against.
;;;;;;
;;;;;;
I'm aiming to write this question up later with pros and cons to try and show management that it's worth spending the man hours on moving the future development of the product to TDD. I want to approach this challenge and develop my reasoning without my own biased point of view.
;;;;;;
"
3476054"",""Test"",""I've introduced unit tests to code bases that did not have it previously. The last big project I was involved with where I did this the product was already in production with zero unit tests when I arrived to the team. When I left - 2 years later - we had 4500+ or so tests yielding about 33 % code coverage in a code base with 230 000 + production LOC (real time financial Win-Forms application). That may sound low, but the result was a significant improvement in code quality and defect rate - plus improved morale and profitability.
";;;;;;
;;;;;;
It can be done when you have both an accurate understanding and commitment from the parties involved.
;;;;;;
;;;;;;
"First of all, it is important to understand that unit testing is a skill in itself. You can be a very productive programmer by """"conventional"""" standards and still struggle to write unit tests in a way that scales in a larger project.
";;;;;;
;;;;;;
"Also, and specifically for your situation, adding unit tests to an existing code base that has no tests is also a specialized skill in itself. Unless you or somebody in your team has successful experience with introducing unit tests to an existing code base, I would say reading Feather's book is a requirement (not optional or strongly recommended).
";;;;;;
;;;;;;
Making the transition to unit testing your code is an investment in people and skills just as much as in the quality of the code base. Understanding this is very important in terms of mindset and managing expectations.
;;;;;;
;;;;;;
Now, for your comments and questions:
;;;;;;
;;;;;;
;;;;;;
However, I'm concerned that I'll end up missing the big picture and end up missing fundamental tests that would have been included if I'd used TDD from the get go.
;;;;;;
;;;;;;
;;;;;;
Short answer: Yes, you will miss tests and yes they might not initially look like what they would have in a green field situation.
;;;;;;
;;;;;;
Deeper level answer is this: It does not matter. You start with no tests. Start adding tests, and refactor as you go. As skill levels get better, start raising the bar for all newly written code added to your project. Keep improving etc...
;;;;;;
;;;;;;
"Now, reading in between the lines here I get the impression that this is coming from the mindset of """"perfection as an excuse for not taking action"""". A better mindset is to focus on self trust. So as you may not know how to do it yet, you will figure out how to as you go and fill in the blanks. Therefore, there is no reason to worry.
";;;;;;
;;;;;;
"Again, its a skill. You can not go from zero tests to TDD-perfection in one """"process"""" or """"step by step"""" cook book approach in a linear fashion. It will be a process. Your expectations must be to make gradual and incremental progress and improvement. There is no magic pill.
";;;;;;
;;;;;;
"The good news is that as the months (and even years) pass, your code will gradually start to become """"proper"""" well factored and well Testd code.
";;;;;;
;;;;;;
As a side note. You will find that the primary obstacle to introducing unit tests in an old code base is lack of cohesion and excessive dependencies. You will therefore probably find that the most important skill will become how to break existing dependencies and decoupling code, rather than writing the actual unit tests themselves.
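To illustrate that dependency-breaking point, here is a minimal sketch (in TypeScript with hypothetical names; the thread itself is about a .NET code base) of extracting an interface and injecting it so that legacy logic can be tested without its real collaborator:

    // Before, the class would construct its own data gateway, so any test had to
    // hit the real dependency. After extracting an interface and passing it in,
    // a test can substitute an in-memory fake. All names here are illustrative.

    interface PriceFeed {
      latestPrice(symbol: string): number;
    }

    class PortfolioValuer {
      constructor(private readonly feed: PriceFeed) {}

      value(holdings: Record<string, number>): number {
        return Object.entries(holdings)
          .reduce((sum, [symbol, qty]) => sum + qty * this.feed.latestPrice(symbol), 0);
      }
    }

    // Unit test with a hand-rolled fake instead of the production feed.
    const fakeFeed: PriceFeed = { latestPrice: () => 10 };
    const valuer = new PortfolioValuer(fakeFeed);
    console.assert(valuer.value({ ACME: 3 }) === 30, "expected 3 * 10 = 30");

The production wiring stays the same; only the seams change so that tests can reach in.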
"Are there any processes/steps that should be adhered to in order to ensure that an existing solution is properly unit tested and not just bodged in?"

Unless you already have it, set up a build server and a continuous integration build that runs on every check-in, including all unit tests with code coverage.

Train your people.

Start somewhere, and keep adding tests while you make progress from the customer's perspective (see below).

Use code coverage as a guiding reference for how much of your production code base is under test.

Build time should always be FAST. If your build time is slow, your unit testing skills are lagging. Find the slow tests and improve them (decouple production code and test in isolation). Well written, you should easily be able to have several thousand unit tests and still complete a build in under 10 minutes (roughly 1 to a few milliseconds per test is a good but very rough guideline; a few exceptions may apply, like code using reflection).

Inspect and adapt.

"How can I ensure that the tests are of good quality and aren't just a case of 'any test is better than no tests'?"

Your own judgement must be your primary source of reality. There is no metric that can replace skill.

If you don't have that experience or judgement, consider contracting someone who does.

Two rough secondary indicators are total code coverage and build speed.

"Is it worth the effort for an existing solution that's in production?"

Yes. The vast majority of the money spent on a custom-built system or solution is spent after it is put in production. And investing in quality, people and skills should never be out of style.

"Would it be better to ignore the testing for this project and add it in a possible future re-write?"

You would have to take into consideration not only the investment in people and skills, but most importantly the total cost of ownership and the expected lifetime of the system.

My personal answer would be "yes, of course" in the majority of cases, because I know it's just so much better, but I recognize that there might be exceptions.

"What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?"

Neither. Your approach should be to add tests to your code base WHILE you are making progress in terms of functionality.

Again, it is an investment in people, skills AND the quality of the code base, and as such it will require time. Team members need to learn how to break dependencies, write unit tests, learn new habits, improve discipline and quality awareness, and learn how to design software better. It is important to understand that when you start adding tests, your team members likely don't yet have these skills at the level they need for the approach to be successful, so stopping progress to spend all your time adding a lot of tests simply won't work.

Also, adding unit tests to an existing code base of any sizeable project is a LARGE undertaking which requires commitment and persistence. You can't change something fundamental, expect a lot of learning along the way, and ask your sponsor not to expect any ROI while you halt the flow of business value. That won't fly, and frankly it shouldn't.

Thirdly, you want to instill sound, business-focused values in your team. Quality never comes at the expense of the customer, and you can't go fast without quality. Also, the customer is living in a changing world, and your job is to make it easier for them to adapt. Customer alignment requires both quality and the flow of business value.

What you are doing is paying off technical debt, and you are doing so while still serving your customers' ever-changing needs. Gradually, as debt is paid off, the situation improves, and it becomes easier to serve the customer better and deliver more value. This positive momentum is what you should aim for, because it underlines the principle of sustainable pace and will maintain and improve morale, both for your development team and for your customer and stakeholders.

Hope that helps.
3476054"",""Test"",""The problem with retrofitting unit tests is you'll realise you didn't think of injecting a dependency here or using an interface there, and before long you'll be rewriting the entire component. If you have the time to do this, you'll build yourself a nice safety net, but you could have introduced subtle bugs along the way.
";;;;;;
;;;;;;
I've been involved with many projects which really needed unit tests from day one, and there is no easy way to get them in there, short of a complete rewrite, which cannot usually be justified when the code is working and already making money. Recently, I have resorted to writing powershell scripts that exercise the code in a way that reproduces a defect as soon as it is raised and then keeping these scripts as a suite of regression tests for further changes down the line. That way you can at least start to build up some tests for the application without changing it too much, however, these are more like end to end regression tests than proper unit tests.
;;;;;;
"
3476054"",""Test"",""";;;;;;
;;;;;;
- Is it worth the effort for an existing solution that's in production?
;;;;;;
;;;;;;
;;;;;;
;;;;;;
Yes!
;;;;;;
;;;;;;
;;;;;;
;;;;;;
- Would it better to ignore the testing for this project and add it in a possible future re-write?
;;;;;;
;;;;;;
;;;;;;
;;;;;;
No!
;;;;;;
;;;;;;
;;;;;;
;;;;;;
- What will be more benefical; spending a few weeks adding tests or a few weeks adding functionality?
;;;;;
;;;;;;
;;;;;;
;;;;;;
Adding testing (especially automated testing) makes it much easier to keep the project working in the future, and it makes it significantly less likely that you'll ship stupid problems to the user.
;;;;;;
;;;;;;
Tests to put in a priori are ones that check whether what you believe the public interface to your code (and each module in it) is working the way you think. If you can, try to also induce each isolated failure mode that your code modules should have (note that this can be non-trivial, and you should be careful to not check too carefully how things fail, e.g., you don't really want to do things like counting the number of log messages produced on failure, since verifying that it is logged at all is enough).
;;;;;;
;;;;;;
Then put in a test for each current bug in your bug database that induces exactly the bug and which will pass when the bug is fixed. Then fix those bugs! :-)
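As a concrete shape for such a bug-pinning test, here is a minimal sketch in TypeScript with Jest (the bug, module and names are hypothetical; the idea is simply a test that fails while the bug exists and passes once it is fixed):

    import { formatInvoiceTotal } from "./invoices"; // hypothetical module under test

    // Regression test for (hypothetical) bug #1234: totals were truncated
    // instead of rounded to the nearest cent. The test reproduces the exact failing input.
    describe("bug #1234: invoice totals are rounded incorrectly", () => {
      it("rounds 10.005 up to 10.01 rather than truncating to 10.00", () => {
        expect(formatInvoiceTotal(10.005)).toBe("10.01");
      });
    });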
It does cost time up front to add tests, but you get paid back many times over at the back end as your code ends up being of much higher quality. That matters enormously when you're trying to ship a new version or carry out maintenance.

Answer (3476054):

Yes.
No.
Adding tests.

Going towards a more TDD approach will actually better inform your efforts to add new functionality and make regression testing much easier. Check it out!
3476054"",""Test"",""It depends...
";;;;;;
It's great to have unit tests but you need to consider who your users are and what they are willing to tolerate in order to get a more bug-free product. Inevitably by refactoring your code which has no unit tests at present, you will introduce bugs and many users will find it hard to understand that you are making the product temporarily more defective to make it less defective in the long run. Ultimately it's the users who will have the final say...
;;;;;;
"
3476054"",""Test"",""If I were in your place, I would probably take an outside-in approach, starting with functional tests that exercise the whole system. I would try to re-document the system's requirements using a BDD specification language like RSpec, and then write tests to verify those requirements by automating the user interface.
";;;;;;
;;;;;;
Then I would do defect driven development for newly discovered bugs, writing unit tests to reproduce the problems, and work on the bugs until the tests pass.
;;;;;;
;;;;;;
For new features, I would stick with the outside-in approach: Start with features documented in RSpec and verified by automating the user interface (which will of course fail initially), then add more finely-grained unit tests as the implementation moves along.
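As a rough illustration of the outer layer of such an outside-in cycle, here is a sketch written as an HTTP-level test in TypeScript with Jest and Node's built-in fetch, rather than RSpec with UI automation, purely to show the shape; the URL, endpoint and feature are hypothetical:

    // Outside-in: this end-to-end check is written first and fails until the
    // feature exists; finer-grained unit tests are added as the implementation grows.
    describe("publishing a post (hypothetical feature)", () => {
      it("makes the post visible at its public URL", async () => {
        const res = await fetch("http://localhost:3000/posts/hello-world");
        expect(res.status).toBe(200);
        const body = await res.json();
        expect(body.title).toBe("Hello, world");
      });
    });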
I'm no expert on the process, but from what little experience I have, I can tell you that BDD via automated UI testing is not easy. I think it's worth the effort, though, and it would probably yield the most benefit in your case.

Answer (3476054):

You say you don't want to buy another book. So just read Michael Feathers' article on working effectively with legacy code. Then buy the book :)

Answer (3476054):

I'm not a seasoned TDD expert by any means, but of course I would say that it's incredibly important to unit test as much as you can. Since the code is already in place, I would start by getting some sort of unit test automation in place. I use TeamCity to exercise all of the tests in my projects, and it gives you a nice summary of how the components did.

With that in place, I'd move on to those really critical, business-logic-like components that can't fail. In my case, there are some basic trigonometry problems that need to be solved for various inputs, so I test the heck out of those. The reason I do this is that when I'm burning the midnight oil, it's very easy to waste time digging down to depths of code that really don't need to be touched, because you know they are tested for all of the possible inputs (in my case, there is a finite number of inputs).
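A table-driven (parameterized) test keeps this kind of exhaustive check over a finite input space cheap to write. A minimal sketch in TypeScript using Jest's test.each, with hypothetical names, since the answer itself doesn't name a framework for this part:

    import { headingBetween } from "./navigation"; // hypothetical trig helper under test

    // One assertion per known input/output pair, so the whole (finite) input
    // space of the critical routine stays covered.
    describe("headingBetween", () => {
      test.each([
        // [dx, dy, expected heading in degrees]
        [0, 1, 0],
        [1, 0, 90],
        [0, -1, 180],
        [-1, 0, 270],
      ])("dx=%d, dy=%d -> %d degrees", (dx, dy, expected) => {
        expect(headingBetween(dx, dy)).toBeCloseTo(expected);
      });
    });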
OK, so now you hopefully feel better about those critical pieces. Instead of sitting down and banging out all of the tests, I would attack them as they come up. If you hit a bug that's a real PITA to fix, write the unit tests for it and get them out of the way.

There are cases where you'll find that testing is tough because you can't instantiate a particular class from the test, so you have to mock it. Oh, but maybe you can't mock it easily because you didn't write to an interface. I take these "whoops" scenarios as an opportunity to implement said interface, because, well, it's a Good Thing.

From there, I'd get your build server, or whatever automation you have in place, configured with a code coverage tool. Such tools create nasty bar graphs with big red zones where you have poor coverage. Now, 100% coverage isn't your goal, nor would 100% coverage necessarily mean your code is bulletproof, but the red bar definitely motivates me when I have free time. :)

Answer (3476054):

I suggest reading a brilliant article by a Toptal engineer that explains where to start adding tests. It contains a lot of maths, but the basic idea is:

1) Measure your code's afferent coupling (CA): how much a class is used by other classes, meaning that breaking it would cause widespread damage.

2) Measure your code's cyclomatic complexity (CC): higher complexity means a higher chance of breaking.

You need to identify classes with high CA and CC, i.e. define a function f(CA, CC), and the classes with the smallest difference between the two metrics should be given the highest priority for test coverage.

Why? Because classes with high CA but very low CC are very important but unlikely to break. On the other hand, classes with low CA but high CC are likely to break, but will cause less damage. So you want to balance the two.
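A minimal sketch of such a prioritisation pass (TypeScript; the scoring function below is just one plausible f(CA, CC), not the one from the linked article, and the class names are made up):

    interface ClassMetrics {
      name: string;
      ca: number; // afferent coupling: how many classes depend on this one
      cc: number; // cyclomatic complexity
    }

    // Rank classes so that those scoring high on BOTH metrics come first;
    // taking the smaller of the two normalised values rewards balance.
    function testPriority(metrics: ClassMetrics[]): ClassMetrics[] {
      const maxCa = Math.max(...metrics.map(m => m.ca), 1);
      const maxCc = Math.max(...metrics.map(m => m.cc), 1);
      const score = (m: ClassMetrics) => Math.min(m.ca / maxCa, m.cc / maxCc);
      return [...metrics].sort((a, b) => score(b) - score(a));
    }

    // Example: OrderProcessor (high on both) should outrank the other two.
    console.log(testPriority([
      { name: "StringUtils", ca: 40, cc: 2 },
      { name: "ReportBuilder", ca: 3, cc: 35 },
      { name: "OrderProcessor", ca: 25, cc: 28 },
    ]).map(m => m.name));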
Answer (3476054):

There are so many good answers here that I will not repeat their content. I checked your profile and it seems you are a C# .NET developer. Because of that, I'm adding a reference to the Microsoft Pex and Moles project, which can help you auto-generate unit tests for legacy code. I know that auto-generation is not the best way, but at least it is a way to start. Check this very interesting article from MSDN Magazine about using Pex for legacy code.

Answer (3476054):

It's unlikely you'll ever have significant test coverage, so you must be tactical about where you add tests:

- As you mentioned, when you find a bug, it's a good time to write a test (to reproduce it) and then fix the bug. If you see the test reproduce the bug, you can be sure it's a good, valid test. Given that such a large portion of bugs are regressions (50%?), it's almost always worth writing regression tests.
- When you dive into an area of code to modify it, it's a good time to write tests around it. Depending on the nature of the code, different tests are appropriate. One good set of advice is found here.

OTOH, it's not worth just sitting around writing tests for code that people are happy with, especially if nobody is going to modify it. It just doesn't add value (except maybe understanding the behavior of the system).

Good luck!

Answer (3476054):

Yes, it can: just try to make sure all code you write from now on has a test in place.

If the code that is already in place needs to be modified and can be tested, then do so, but it is better not to be too vigorous in trying to get tests in place for stable code. That sort of thing tends to have a knock-on effect and can spiral out of control.

Answer (3476054):

- Yes, it is. When you start adding new functionality, it can require modifying old code, and as a result it is a source of potential bugs.
- (See the first point.) Before you start adding new functionality, all (or almost all) of the code should ideally be covered by unit tests.
- (See the first and second points.) :) A new, grandiose piece of functionality can "destroy" the old, working code.
3476054"",""Test"",""Update
";;;;;;
;;;;;;
6 years after the original answer, I have a slightly different take.
;;;;;;
;;;;;;
I think it makes sense to add unit tests to all new code you write - and then refactor places where you make changes to make them testable.
;;;;;;
;;;;;;
Writing tests in one go for all your existing code will not help - but not writing tests for new code you write (or areas you modify) also doesn't make sense. Adding tests as you refactor/add things is probably the best way to add tests and make the code more maintainable in an existing project with no tests.
;;;;;;
;;;;;;
Earlier answer
;;;;;;
;;;;;;
Im going to raise a few eyebrows here :)
;;;;;;
;;;;;;
First of all what is your project - if it is a compiler or a language or a framework or anything else that is not going to change functionally for a long time, then I think its absolutely fantastic to add unit tests.
;;;;;;
;;;;;;
However, if you are working on an application that is probably going to require changes in functionality (because of changing requirements) then there is no point in taking that extra effort.
;;;;;;
;;;;;;
Why?
;;;;;;
;;;;;;
;;;;;;
Unit tests only cover code tests - whether the code does what it is designed to - it is not a replacement for manual testing which anyways has to be done (to uncover functional bugs, usability issues and all other kinds of issues)
;;;;;;
Unit tests cost time! Now where I come from, that's a precious commodity - and business generally picks better functionality over a complete test suite.
;;;;;;
If your application is even remotely useful to users, they are going to request changes - so you will have versions that will do things better, faster and probably do new things - there may also be a lot of refactoring as your code grows. Maintaining a full grown unit test suite in a dynamic environment is a headache.
;;;;;;
Unit tests are not going to affect the perceived quality of your product - the quality that the user sees. Sure, your methods might work exactly as they did on day 1, the interface between presentation layer and business layer might be pristine - but guess what? The user does not care! Get some real Testrs to test your application. And more often than not, those methods and interfaces have to change anyways, sooner or later.
;;;;;;
;;;;;;
;;;;;;
What will be more benefical; spending a few weeks adding tests or a few weeks adding functionality? - There are hell lot of things that you can do better than writing tests - Write new functionality, improve performance, improve usability, write better help manuals, resolve pending bugs, etc etc.
;;;;;
;;;;;;
Now dont get me wrong - If you are absolutely positive that things are not going to change for next 100 years, go ahead, knock yourself out and write those tests. Automated Tests are a great idea for APIs as well, where you absolutely do not want to break third party code. Everywhere else, its just something that makes me ship later!
;;;;;;
"
3476054"",""Test"",""";;;;;;
Is it worth the effort for an existing solution that's in production?;;;;;;
;;;;;;
Yes. But you don't have to write all unit tests to get started. Just add them one by one.;;;;;;
;;;;;;
;;;;;;
Would it better to ignore the testing for this project and add it in a possible future re-write?;;;;;;
;;;;;;
No. First time you are adding code which breaks the functionality, you will regret it.;;;;;;
;;;;;;
;;;;;;
What will be more benefical; spending a few weeks adding tests or a few weeks adding functionality?;;;;;
;;;;;;
For new functionality (code) it is simple. You write the unit test first and then the functionality.;;;;;;
For old code you decide on the way. You don't have to have all unit tests in place... Add the ones that hurt you most not having... Time (and errors) will tell on which one you have to focus ;);;;;;
"
3476054"",""Test"",""I'm very fond of Refactor the Low-hanging Fruit as an answer to the question of where to begin refactoring. It's a way to ease into better design without biting off more than you can chew.
";;;;;;
;;;;;;
I think the same logic applies to TDD - or just unit tests: write the tests you need, as you need them; write tests for new code; write tests for bugs as they appear. You're worried about neglecting harder-to-reach areas of the code base, and it's certainly a risk, but as a way to get started: get started! You can mitigate the risk down the road with code coverage tools, and the risk isn't (in my opinion) that big, anyway: if you're covering the bugs, covering the new code, covering the code you're looking at, then you're covering the code that has the greatest need for tests.
;;;;
"
3476054"",""Test"",""I would like to start this answer by saying that unit testing is really important because it will help you arrest bugs before they creep into production.
";;;;;;
;;;;;;
Identify the areas projects/modules where bugs have been re-introduced. start with those projects to write tests. It perfectly makes sense to write tests for new functionality and for bug fix.
;;;;;;
;;;;;;
;;;;;;
Is it worth the effort for an existing;;;;;;
solution that's in production?
;;;;;;
;;;;;;
;;;;;;
Yes. You will see the effect of bugs coming down and maintenance becoming easier
;;;;;;
;;;;;;
;;;;;;
Would it better to ignore the testing;;;;;;
for this project and add it in a;;;;;;
possible future re-write?
;;;;;;
;;;;;;
;;;;;;
I would recommend to start if from now.
;;;;;;
;;;;;;
;;;;;;
What will be more benefical; spending;;;;;
a few weeks adding tests or a few;;;;;;
weeks adding functionality?
;;;;;;
;;;;;;
;;;;;;
You are asking the wrong question. Definitely, functionality is more important than anything else. But, rather you should ask if spending a few weeks adding test will make my system more stable. Will this help my end user? Will it help a new developer in the team to understand the project and also to ensure that he/she, doesn't introduce a bug due to lack of understanding of the overall impact of a change.
;;;;;;
"
3476054"",""Test"",""You don't mention the implementation language, but if in Java then you could try this approach:
";;;;;;
;;;;;;
;;;;;;
In a seperate source tree build regression or 'smoke' tests, using a tool to generate them, which might get you close to 80% coverage. These tests execute all the code logic paths, and verify from that point on that the code still does exactly what it does currently (even if a bug is present). This gives you a safety net against inadvertently changing behaviour when doing the necessary refactoring to make code easily unit testable by hand.
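In spirit, each generated smoke test is a characterization test: it pins whatever the code does today, bugs included. A hand-written equivalent might look like this minimal TypeScript/Jest sketch (the module is hypothetical; the answer itself assumes Java and a test-generation tool):

    import { legacyPriceReport } from "./legacy/reporting"; // hypothetical legacy function

    // Characterization ("smoke") test: the expected value is captured by running the
    // current code once, NOT derived from the spec. It only tells us when behaviour
    // changes, so that refactoring can proceed safely.
    it("legacyPriceReport still produces the output it produced when this test was written", () => {
      const output = legacyPriceReport({ region: "EU", quarter: 2 });
      expect(output).toMatchSnapshot(); // Jest stores the first run as the golden master
    });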
For each bug you fix or feature you add from now on, use a TDD approach to ensure new code is designed to be testable, and place these tests in a normal test source tree.

Existing code will also likely need to be changed or refactored to make it testable as part of adding new features; your smoke tests will give you a safety net against regressions or inadvertent subtle changes to behaviour.

When making changes (bug fixes or features) via TDD, it's likely the companion smoke test will fail once you're done. Verify that the failures are as expected given the changes made, and then remove the less readable smoke test, since your hand-written unit tests now have full coverage of that improved component. Ensure that your test coverage does not decline; it should only stay the same or increase.

When fixing bugs, write a failing unit test that exposes the bug first.
3476054"",""Test"",""Whether it's worth adding unit tests to an app that's in production depends on the cost of maintaining the app. If the app has few bugs and enhancement requests, then maybe it's not worth the effort. OTOH, if the app is buggy or frequently modified then unit tests will be hugely beneficial.
";;;;;;
;;;;;;
"At this point, remember that I'm talking about adding unit tests selectively, not trying to generate a suite of tests similar to those that would exist if you had practiced TDD from the start. Therefore, in response to the second half of your second question: make a point of using TDD on your next project, whether it's a new project or a re-write (apologies, but here is a link to another book that you really should read: Growing Object Oriented Software Guided by Tests)
";;;;;;
;;;;;;
My answer to your third question is the same as the first: it depends on the context of your project.
;;;;;;
;;;;;;
Embedded within you post is a further question about ensuring that any retro-fitted testing is done properly. The important thing to ensure is that unit tests really are unit tests, and this (more often than not) means that retrofitting tests requires refactoring existing code to allow decoupling of your layers/components (cf. dependency injection; inversion of control; stubbing; mocking). If you fail to enforce this then your tests become integration tests, which are useful, but less targeted and more brittle than true unit tests.
;;;
"
3476054"",""Test"",""I'll add my voice and say yes, it's always useful!
";;;;;;
;;;;;;
There are some distinctions you should keep in mind, though: black-box vs white-box, and unit vs functional. Since definitions vary, here's what I mean by these:
;;;;;;
;;;;;;
;;;;;;
- Black-box = tests that are written without special knowledge of the implementation, typically poking around at the edge cases to make sure things happen as a naive user would expect.
;;;;;;
- White-box = tests that are written with knowledge of the implementation, which often try to exercise well-known failure points.
;;;;;;
- Unit tests = tests of individual units (functions, separable modules, etc). For example: making sure your array class works as expected, and that your string comparison function returns the expected results for a wide range of inputs.
;;;;;;
- Functional tests = tests of the entire system all at once. These tests will exercise a big chunk of the system all at once. For example: init, open a connection, do some real-world stuff, close down, terminate. I like to draw a distinction between these and unit tests, because they serve a different purpose.
;;;;;;
;;;;;;
;;;;;;
When I've added tests to a shipping product late in the game, I found that I got the most bang for the buck from white-box and functional tests. If there's any part of the code that you know is especially fragile, write white-box tests to cover the problem cases to help make sure it doesn't break the same way twice. Similarly, whole-system functional tests are a useful sanity check that helps you make sure you never break the 10 most common use cases.
;;;;;;
;;;;;;
Black-box and unit tests of small units are useful too, but if your time is limited, it's better to add them early. By the time you're shipping, you've generally found (the hard way) the majority of the edge cases and problems that these tests would have found.
;;;;;;
;;;;;;
Like the others, I'll also remind you of the two most important things about TDD:
;;;;;;
;;;;;;
;;;;;;
- Creating tests is a continuous job. It never stops. You should try to add new tests every time you write new code, or modify existing code.
;;;;;;
- Your test suite is never infallible! Don't let the fact that you have tests lull you into a false sense of security. Just because it passes the test suite doesn't mean it's working correctly, or that you haven't introduced a subtle performance regression, etc.
;;;;;;
;;;;;;
"
3476054"",""Test"",""It's absolutely worth it. Our app has complex cross-validation rules, and we recently had to make significant changes to the business rules. We ended up with conflicts that prevented the user from saving. I realized it would take forever to sort it out in the applcation (it takes several minutes just to get to the point where the problems were). I'd wanted to introduce automated unit tests and had the framework installed, but I hadn't done anything beyond a couple of dummy tests to make sure things were working. With the new business rules in hand, I started writing tests. The tests quickly identified the conditions that caused the conflicts, and we were able to get the rules clarified.
";;;;;;
;;;;;;
If you write tests that cover the functionality you're adding or modifying, you'll get an immediate benefit. If you wait for a re-write, you may never have automated tests.
;;;;;;
;;;;;;
You shouldn't spend a lot of time writing tests for existing things that already work. Most of the time, you don't have a specification for the existing code, so the main thing you're testing is your reverse-engineering ability. On the other hand, if you're going to modify something, you need to cover that functionality with tests so you'll know you made the changes correctly. And of course, for new functionality, write tests that fail, then implement the missing functionality.
;;;;;;
"
3476054"",""Test"",""When we started adding tests, it was to a ten-year-old, approximately million-line codebase, with far too much logic in the UI and in the reporting code.
";;;;;;
;;;;;;
One of the first things we did (after setting up a continuous build server) was to add regression tests. These were end-to-end tests.
;;;;;;
;;;;;;
;;;;;;
- Each test suite starts by initializing the database to a known state. We actually have dozens of regression datasets that we keep in Subversion (in a separate repository from our code, because of the sheer size). Each test's FixtureSetUp copies one of these regression datasets into a temp database, and then runs from there.
;;;;;;
- The test fixture setup then runs some process whose results we're interested in. (This step is optional -- some regression tests exist only to test the reports.)
;;;;;;
- Then each test runs a report, outputs the report to a .csv file, and compares the contents of that .csv to a saved snapshot. These snapshot .csvs are stored in Subversion next to each regression dataset. If the report output doesn't match the saved snapshot, the test fails.
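A minimal sketch of that snapshot-comparison step (in TypeScript; the file paths and the report runner are hypothetical, and the real harness described above is a .NET/NUnit-style setup with FixtureSetUp rather than this):

    import { readFileSync } from "node:fs";
    import { runReport } from "./reports"; // hypothetical: renders a report to CSV text

    // Regression check: the report output must match the .csv snapshot checked into
    // version control. A deliberate change means re-approving (updating) the snapshot.
    it("monthly sales report matches its approved snapshot", () => {
      const actualCsv = runReport("monthly-sales", { dataset: "regression-042" });
      const approvedCsv = readFileSync("snapshots/monthly-sales.approved.csv", "utf8");
      expect(actualCsv).toBe(approvedCsv);
    });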
The purpose of regression tests is to tell you if something changes. That means they fail if you broke something, but they also fail if you changed something on purpose (in which case the fix is to update the snapshot file). You don't even know that the snapshot files are correct; there might be bugs in the system (and then, when you fix those bugs, the regression tests will fail).

Nevertheless, regression tests were a huge win for us. Just about everything in our system has a report, so by spending a few weeks getting a test harness around the reports, we were able to get some level of coverage over a huge part of our code base. Writing the equivalent unit tests would have taken months or years. (Unit tests would have given us far better coverage and would have been far less fragile, but I'd rather have something now than wait years for perfection.)

Then we went back and started adding unit tests when we fixed bugs, added enhancements, or needed to understand some code. Regression tests in no way remove the need for unit tests; they're just a first-level safety net, so that you get some level of test coverage quickly. Then you can start refactoring to break dependencies, so you can add unit tests; and the regression tests give you confidence that your refactoring isn't breaking anything.

Regression tests have problems: they're slow, and there are too many reasons why they can break. But at least for us, they were well worth it. They've caught countless bugs over the last five years, and they catch them within a few hours rather than waiting for a QA cycle. We still have those original regression tests, spread over seven different continuous-build machines (separate from the one that runs the fast unit tests), and we even add to them from time to time, because we still have so much code that our 6,000+ unit tests don't cover.

Answer (3476054):

I agree with what almost everyone else has said: adding tests to existing code is valuable. I will never disagree with that point, but I would like to add one caveat.

Although adding tests to existing code is valuable, it does come at a cost. It comes at the cost of not building out new features. How these two things balance out depends entirely on the project, and there are a number of variables:

- How long will it take you to put all that code under test? Days? Weeks? Months? Years?
- Who are you writing this code for? Paying customers? A professor? An open source project?
- What is your schedule like? Do you have hard deadlines you must meet? Do you have any deadlines at all?

Again, let me stress: tests are valuable, and you should work to put your old code under test. This is really more a matter of how you approach it. If you can afford to drop everything and put all your old code under test, do it. If that's not realistic, here's what you should do at the very least:

- Any new code you write should be completely under unit test.
- Any old code you happen to touch (bug fix, extension, etc.) should be put under unit test.

Also, this is not an all-or-nothing proposition. If you have a team of, say, four people, and you can meet your deadlines by putting one or two people on legacy testing duty, by all means do that.

Edit:

"I'm aiming to write this question up later with pros and cons to try and show management that it's worth spending the man-hours on moving the future development of the product to TDD."

This is like asking "What are the pros and cons of using source control?" or "What are the pros and cons of interviewing people before hiring them?" or "What are the pros and cons of breathing?"

Sometimes there is only one side to the argument. You need to have automated tests of some form for any project of any complexity. No, tests don't write themselves, and yes, it will take a little extra time to get things out the door. But in the long run it will take more time and cost more money to fix bugs after the fact than to write tests up front. Period. That's all there is to it.
3476054"",""Test"",""""";;;;;;
"3476054,""Test"",""Can unit testing be successfully added into an existing production project? If so, how and is it worth it?""";;;;;;
"3663075,""Test"",""I have a Rails application with over 2,000 examples in my RSpec tests. Needless to say, it's a large application and there's a lot to be Testd. Running these tests at this point is very inefficient and because it takes so long, we're almost at the point of being discouraged from writing them before pushing a new build. I added --profile to my spec.opts to find the longest running examples and there are at least 10 of them that take an average of 10 seconds to run. Is that normal amongst you RSpec experts? Is 10 seconds entirely too long for one example? I realize that with 2,000 examples, it will take a non-trivial amount of time to test everything thoroughly - but at this point 4 hours is a bit ludicrous.
";;;;;;
;;;;;;
What kind of times are you seeing for your longest running examples? What can I do to troubleshoot my existing specs in order to figure out bottlenecks and help speed things up. Every minute would really help at this point.
;;;;;;
"
3663075"",""Test"",""10 seconds is a very long time for any single test to run. My gut feeling is that your spec target is running both unit and integration tests at the same time. This is a typical thing that projects fall into and at some stage, you will need to overcome this technical debt if you want to produce more, faster. There are a number of strategies which can help you to do this... and I'll recommend a few that I have used in the past.
";;;;;;
;;;;;;
1. Separate Unit From Integration Tests
;;;;;;
;;;;;;
The first thing I would do is to separate unit from integration tests. You can do this either by:
;;;;;;
;;;;;;
;;;;;;
- Moving them (into separate folders under the spec directory) - and modifying the rake targets
;;;;;;
- Tagging them (rspec allows you to tag your tests)
;;;;;;
;;;;;;
;;;;;;
The philosophy goes, that you want your regular builds to be quick - otherwise people won't be too happy to run them often. So get back to that territory. Get your regular tests to run quick, and use a continuous integration server to run the more complete build.
;;;;;;
;;;;;;
An integration test is a test that involves external dependencies (e.g. Database, WebService, Queue, and some would argue FileSystem). A unit test just tests the specific item of code that you want checked. It should run fast (9000 in 45 secs is possible), i.e. most of it should run in memory.
;;;;;;
;;;;;;
2. Convert Integration Tests To Unit Tests
;;;;;;
;;;;;;
If the bulk of your unit tests is smaller than your integration test suite, you have a problem. What this means is that inconsistencies will begin to appear more easily. So from here, start creating more unit tests to replace integration tests. Things you can do to help in this process are:
;;;;;;
;;;;;;
;;;;;;
- Use a mocking framework instead of the real resource. Rspec has an inbuilt mocking framework.
;;;;;;
- Run rcov on your unit test suite. Use that to gauge how thorough your unit test suite is.
;;;;;;
;;;;;;
;;;;;;
Once you have a proper unit test(s) to replace an integration test - remove the integration test. Duplicate testing only makes maintenance worse.
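To make the replacement concrete, here is the shape of the move, sketched in TypeScript with Jest rather than RSpec (that matches the only other code in this dump; all names are hypothetical): the slow test talks to a real database, while the fast one hands the code under test an in-memory stand-in created by the mocking framework.

    // Code under test depends on an abstract finder, not on a concrete database driver.
    interface PostFinder {
      bySlug(slug: string): Promise<{ title: string } | null>;
    }

    async function postTitle(finder: PostFinder, slug: string): Promise<string> {
      const post = await finder.bySlug(slug);
      return post ? post.title : "Not found";
    }

    // Unit test: the mocking framework stands in for the database, so the test
    // runs entirely in memory and finishes in milliseconds.
    it("returns the title for a known slug", async () => {
      const finder: PostFinder = { bySlug: jest.fn().mockResolvedValue({ title: "Hello" }) };
      await expect(postTitle(finder, "hello")).resolves.toBe("Hello");
    });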
3. Don't Use Fixtures

Fixtures are evil. Use a factory instead (Machinist or factory_bot). These systems can build more adaptable graphs of data and, more importantly, they can build in-memory objects which you can use, rather than loading things from an external data source.

4. Add Checks To Stop Unit Tests Becoming Integration Tests

Now that you have faster testing in place, it's time to put in checks to STOP this from happening again.

There are libraries which monkey-patch ActiveRecord to throw an error when a test tries to access the database (UnitRecord).

You could also try pairing and TDD, which can help force your team to write faster tests, because:

- somebody's checking, so nobody gets lazy, and
- proper TDD requires fast feedback; slow tests just make the whole thing painful.

5. Use Other Libraries To Overcome The Problem

Somebody mentioned Spork (which speeds up load times for the test suite under Rails 3) and hydra/parallel_tests, which run unit tests in parallel (across multiple cores).

This should probably be used LAST. Your real problem lies in steps 1, 2 and 3. Solve that and you will be in a better position to roll out additional infrastructure.
3663075"",""Test"",""For a great cookbook on improving the performance of your test suite, check out the Grease Your Suite presentation.
";;;;;;
;;;;;;
He documents a 45x speedup in test suite run time by utilizing techniques such as:
;;;;;;
;;;;;;
;;;;;;
"
3663075"",""Test"",""Delete the existing test suite. Will be incredibly effective.
";;;;;;
"
3663075"",""Test"",""faster_require gem might help you.";;;;;;
"Besides that your only way is to (like you did) profile and optimize, or use spork or something that runs your specs in parallel for you. http://ruby-toolbox.com/categories/distributed_testing.html
";;;;;;
"
3663075"",""Test"",""You can follow a few simple tips to first investigate where most of the time is spent if you haven't already tried them. Look at the article below:
";;;;;;
;;;;;;
"https://blog.mavenhive.in/7-tips-to-speed-up-your-webdriver-tests-4f4d043ad581
";;;;;;
;;;;;;
I guess most of these are generic steps which would apply irrespective of the tool used for testing too.
;;;;;;
"
3663075"",""Test"",""Several people have mentioned Hydra above. We have used it with great success in the past. I recently documented the process of getting hydra up and running: http://logicalfriday.com/2011/05/18/faster-rails-tests-with-hydra/
";;;;;;
;;;;;;
I would agree with the sentiment that this kind of technique should not be used as a substitute for writing tests that are well structured and fast by default.
;;;;;;
"
3663075"",""Test"",""If you are using ActiveRecord models, you should also consider the cost of BCrypt encryption.
";;;;;;
;;;;;;
"You can read more about it on this blog post: http://blog.syncopelabs.co.uk/2012/12/speed-up-rspec-test.html
";;;;;;
"
3663075"",""Test"",""10 seconds per example seems like a very long time. I've never seen a spec that took more than one second, and most take far less. Are you testing network connections? Database writes? Filesystem writes?
";;;;;;
;;;;;;
Use mocks and stubs as much as possible - they are much faster than writing code that hits the database. Unfortunately mocking and stubbing also take more time to write (and are harder to do correctly). You have to balance the time spent writing tests vs. the time spent running tests.
;;;;;;
;;;;;;
"I second Andrew Grimm's comment about looking into a CI system which might allow you to parallelize your test suite. For something that size, it might be the only viable solution.
";;;;;;
"
3663075"",""Test"",""You can use Spork. It has support for 2.3.x ,
";;;;;;
;;;;;;
"https://github.com/sporkrb/spork
";;;;;;
;;;;;;
or ./script/spec_server which may work for 2.x
;;;;;;
;;;;;;
Also you can edit the database configuration ( which essentially speeds up the database queries etc ), which will also increase performance for tests.
;;;;;;
"
3663075"",""Test"",""""";;;;;;
"3663075,""Test"",""Speeding up RSpec tests in a large Rails application""";;;;;;
"4444838,""Test"",""I'm in the process of pushing my company towards having unit tests as a major part of the development cycle. I've gotten the testing framework working with our MVC framework, and multiple members of the team are now writing unit tests. I'm at the point, though, where there's work that needs to be done to improve our hourly build, the ease of figuring out what fixtures you need to use, adding functionality to the mock object generator, etc., etc., and I want to be able to make the case for this work to management. In addition, I'd like us to allocate time to write unit tests for the most critical pieces of existing code, and I just don't see that happening without a more specific case than """"everyone knows unit tests are good"""".
";;;;;;
;;;;;;
How do you quantify the positive impact of (comprehensive and reliable) unit tests on your projects? I can certainly look at the number and severity of bugs filed and correlate it with our increases in code coverage, but that's a rather weak metric.
;;;;;;
"
4444838"",""Test"",""Quantification of test-quality is very difficult.
";;;;;;
;;;;;;
"I see code-coverage only as guidance not as test-quality metric. You can literally write test of 100% code-coverage without testing anything (e.g. no asserts are used at all). Also have a look at my blog-post where I warn against metric-pitfalls.
";;;;;;
;;;;;;
The only sensible quantitative metric I know of and which counts for business is really reduced effort of bug-fixes in production-code. Also reduced bug-severity. Still it is very difficult to isolate that unit-tests are the only source of this success (it could also be improvement of process or communication).
;;;;;;
;;;;;;
Generally I would focus on the qualitative approach:
;;;;;;
;;;;;;
;;;;;;
- Do developers feel more comfortable changing code (because tests are a trustworthy safety net)?
;;;;;;
- When bugs occur in production analysis really shows that it was unTestd code (vice versa a minor conlusion that it wouldn't have occurred if there had been unit test)
;;;;;;
;;;;;;
"
4444838"",""Test"",""Sonar is a company that makes a very interesting code inspection tool, they actually try to measure technical debt programaticaly, which correlates unTestd code and developer price per hour.
";;;;;;
"
4444838"",""Test"",""""";;;;;;
"4444838,""Test"",""How do I measure the benefits of unit testing?""";;;;;;
"12526160,""Test"",""How would I mock out the database in my node.js application, which in this case uses mongodb as the backend for a blog REST API ?
";;;;;;
;;;;;;
Sure, I could set the database to a specific testing -database, but I would still save data and not test my code only, but also the database, so I am actually not doing unit testing but integration testing.
;;;;;;
So what should one do? Create database wrappers as a middle layer between application and db and replace the DAL when in testing?
;;;;;;
;;;;;;
// app.js ;;;;;;
var express = require('express');;;;;;
app = express(),;;;;;;
mongo = require('mongoskin'),;;;;;;
db = mongo.db('localhost:27017/test?auto_reconnect');;;;;;
;;;;;;
app.get('/posts/:slug', function(req, res){;;;;;;
db.collection('posts').findOne({slug: req.params.slug}, function (err, post) {;;;;;;
res.send(JSON.stringify(post), 200);;;;;;
});;;;;;
});;;;;;
;;;;;;
app.listen(3000);;;;;;
;;;;;;
;;;;;;
;;;;;;
;;;;;;
// test.js;;;;;;
r = require('requestah')(3000);;;;;;
"describe(""""Does some testing"""", function() {";;;;;;
;;;;;;
" it(""""Fetches a blogpost by slug"""", function(done) {";;;;;;
" r.get(""""/posts/aslug"""", function(res) {";;;;;;
expect(res.statusCode).to.equal(200);;;;;;
" expect(JSON.parse(res.body)[""""title""""]).to.not.equal(null)";;;;;;
return done();;;;;;
});;;;;;
;;;;;;
});;;;;;
));;;;;;
;;;;;;
"
12526160"",""Test"",""There is a general rule of thumb when it comes to mocking which is
";;;;;;
;;;;;;
Don't mock anything you don't own.
;;;;;;
;;;;;;
If you want to mock out the db hide it behing an abstracted service layer and mock that layer. Then make sure you integration test the actual service layer.
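A minimal sketch of what that layering could look like for the route in the question (TypeScript; the service name and shape are assumptions of mine, not the answerer's code):

    // posts-service.ts: the only place that knows about MongoDB.
    export interface PostsService {
      findBySlug(slug: string): Promise<{ slug: string; title: string } | null>;
    }

    // routes.ts: the handler depends on the service layer, never on the driver.
    import express from "express";

    export function buildApp(posts: PostsService) {
      const app = express();
      app.get("/posts/:slug", async (req, res) => {
        const post = await posts.findBySlug(req.params.slug);
        res.status(post ? 200 : 404).json(post);
      });
      return app;
    }

    // In a unit test, pass buildApp() a fake PostsService; in an integration test,
    // pass the real Mongo-backed implementation and point it at a test database.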
Personally, I've moved away from using mocks for testing and instead use them for top-to-bottom design, helping me drive development from the top towards the bottom, mocking out service layers as I go and then eventually implementing those layers and writing integration tests. Used as a test tool, mocks tend to make your tests very brittle and in the worst case lead to a divergence between actual behavior and mocked behavior.

Answer (12526160):

I don't think database-related code can be properly tested without testing it against the database software. That's because the code you're testing is not just JavaScript but also the database query string. Even though in your case the queries look simple, you can't rely on them staying that way forever.

So any database emulation layer will necessarily implement the entire database (minus disk storage, perhaps). By then you end up doing integration testing with the database emulator even though you call it unit testing. Another downside is that the database emulator may end up having a different set of bugs compared to the database, and you may end up having to code for both the database emulator and the database (kind of like the situation with IE vs Firefox vs Chrome, etc.).

Therefore, in my opinion, the only way to correctly test your code is to interface it with the real database.

Answer (12526160):

I don't agree with the selected answer or the other replies so far.

Wouldn't it be awesome if you could catch errors spawned by the chaotic and often messy changes made to DB schemas and your code BEFORE they get to QA? I bet the majority would shout "heck yes!"

You most certainly can and should isolate and test your DB schemas. And you don't do it based on an emulator or a heavy image or recreation of your DB and machine. This is what things like SQLite are for, just as one example. You mock it with a lightweight in-memory instance containing static data that does not change during the run, which means you are truly testing your DB in isolation and you can trust your tests as well. And obviously it's fast, because it's in memory, a skeleton, and is scrapped at the end of a test run.

So yes, you should test the SCHEMA that is exported into a very lightweight in-memory instance of whatever DB engine/runtime you are using, and that, along with a very small amount of static data, becomes your isolated, mocked DB.

You export your real schemas from your real DB periodically (in an automated fashion) and import/update them into your lightweight in-memory DB instance before every push to QA, and you will know instantly if any recent schema changes made by your DB admins or other developers have broken any tests.

While I applaud the effort to give the best possible answer, I would down-vote the currently accepted one if I could, but I am new and have not yet built up enough reputation to do so.

As for the person who replied with "don't mock anything you don't own": I think he meant to say "don't test anything you don't own". But you DO mock things you do not own! Because those are the things not under test that need to be isolated!

I plan on sharing the HOW with you and will update this post at a future point in time with real example JS code!

This is what many test-driven teams do all the time. You just have to understand the how.

Answer (12526160):

I had this dilemma and chose to work with a test DB and clean it every time a test begins (how to drop everything: https://stackoverflow.com/a/25639377/378594).

With NPM you can even make a test script that creates the DB file and cleans it up afterwards.

Answer (12526160):

The purpose of mocking is to skip the complexity and unit test your own code. If you want to write e2e tests, then use the DB.

Writing code to set up and tear down a testing DB for unit testing is technical debt and incredibly unsatisfying.

There are mock libraries on npm:

mongo: https://www.npmjs.com/package/mongomock

mongoose: https://www.npmjs.com/package/mockgoose

If those don't support the features you need, then yes, you may need to use the real thing.

Answer (12526160):

My preferred approach to unit testing DB code in any language is to access Mongo through a Repository abstraction (there's an example here: http://iainjmitchell.com/blog/?p=884). Implementations will vary in terms of the DB-specific functionality they expose, but by removing all the Mongo code from your own logic you're in a position to unit test. Simply replace the Mongo Repository implementation with a stubbed-out version, which is trivially easy; for instance, just store objects in a simple in-memory dictionary collection.
";;;;;;
;;;;;;
You'll get the benefits of unit testing your own code this way without DB dependencies but you'll still need to do integration tests against the main DB because you'll probably never be able to emulate the idiosyncrasies of the real database as others have said here. The kind of things I've found are as simple as indexing in safe mode vs without safe mode. Specifically, if you have a unique index your dummy memory implementation might honour that in all cases, but Mongo won't without safe-mode.
;;;;;;
;;;;;;
So whilst you'll still need to test against the DB for some operations, you'll certainly be able to unit test your own logic properly with a stubbed out Repository implementation.
;;;;;;
"
12526160"",""Test"",""""";;;;;;
"12526160,""Test"",""Mocking database in node.js?""";;;;;;
"24367789,""Test"",""We get a java app source code shipped from a partner, but it doesn't include test code.
";;;;;;
;;;;;;
We want to run sonar qube against the code; but against our standard quality profile (PMD/Findbugs etc) technical debt gets skewed by no test coverage. I tried disabling the coverage rules, or setting the coverage ration to 0 but that just killed everything, no issues, no technical debt or useful feedback on the code.
;;;;;
;;;;;;
Can anyone suggest a ruleset or mechanism that would allow us to run a sonar report on the code and retain some of the useful feedback relating to technical debt? Other than writing a new plugin....
;;;;;;
"
24367789"",""Test"",""In the todays sonar configuration there is an option to define where are the coverage test result file. Sonar only reads the file to figure out the coverage.
";;;;;;
;;;;;;
This file is in a default folder. If it don't exists sonar will ignore coverage aspects during the scan. What I did sometimes was just change the default location to some unexistent folder.
;;;;;;
;;;;;;
I will not give here the exact path to find this configuration in sonar because it changes from time to time. However, you should find it very easily.
;;;;;;
"
24367789"",""Test"",""""";;;;;;
"24367789,""Test"",""SonarQube configurable technical debt""";;;;;;
"38563315,""Test"",""I've had a love/hate relationship with testing/tdd my whole career. Recenty I've started to enjoy writing tests by leaving off the assert statements. It's made all the difference in the world for me. Here's why:
";;;;;;
;;;;;;
;;;;;;
my speed is close to where it was when I was not writing any tests.
;;;;;;
I don't waste time trying to make assert(foo, 2) or !assert(foo, nil) logic at the end of each test
;;;;;;
I just put a puts foo.inspect at the end of the test, run it, and move on when it's working
;;;;;;
the next programmer still has a wonderful little test that shows my intent and knows this code was at one point working or it wouldn't exist.
;;;;;;
there's no breaking the build when tests fails because without asserts tests never fail.
;;;;;;
tests are not run 24/7 over and over to catch something. They are there when you want to debug some code and leave very nice notes to the next programmer (maybe you)
;;;;;;
there's no technical debt to pay down as years go by and tests break. The tests are always just there as archaeological relics of code that puts some useful information to the console at some point in time.
;;;;;;
;;;;;;
;;;;;;
My question is: is this a known style of testing? I just found it out of necessity. But are TDD people using this system?
;;;;;;
"
38563315"",""Test"",""One of the great advantages of unit tests is that once they are written they can be run automatically hundreds or thousands of time with no additional effort.
";;;;;;
;;;;;;
It is this that makes continuous integration so powerful. Automate the running of your tests and then run them frequently. That way when you add new code or refactor you get quick feedback if existing code has been broken.
;;;;;;
;;;;;;
The presence of good automated unit test coverage removes some of the fear of changing code. This encourages more frequent refactoring and often results in a better quality code base.
;;;;;;
;;;;;;
If you write unit tests that have to be run manually then you lose this advantage.
;;;;;;
"
38563315"",""Test"",""";;;;;;
- speed is relative :) you are wasting time on debugging
;;;;;;
- don't write an assert if you don't want to; test only what is needed
;;;;;;
- no opinion on this matter, I have no clue what .inspect is
;;;;;;
- does it, or is that what comments are for?
;;;;;;
IMHO a test should fail at the start, and then you write code so that it doesn't fail
;;;;;;
;;;;;;
$myClass = new myClass();
$returnVar = $myClass->methodReturnsTrue();
$this->assertTrue($returnVar);
;;;;;;
;;;;;;
if you run the test without programming anything it will fail (this is the first step)
;;;;;;
;;;;;;
now write the code that makes it work
;;;;;;
;;;;;;
class myClass
{
    public function methodReturnsTrue()
    {
        return true;
    }
}
;;;;;;
;;;;;;
It is now fixed: the test runs and your code is tested. You can now run it over and over again without it failing.
;;;;;;
You don't have to run tests 24/7, only on code changes (new features or bug fixes); use CI for that.
;;;;;;
- the relics make sure that your code still does what it did 20 years ago. Of course, on refactoring you have to rethink the tests, but in the end they make sure you don't break a feature that you had forgotten about or never knew was there in the first place
;;;;;;
;;;;;;
"
38563315"",""Test"",""""";;;;;;
"38563315,""Test"",""how do people struggling with TDD feel about leaving out the asserts?""";;;;;;
"52152618,""Test"",""I try to implement unit tests for our C++ legacy code base. I read through Michael Feathers' """"Working effectively with legacy code"""" and got some ideas on how to achieve my goal. I use GoogleTest/GoogleMock as a framework and have already implemented some first tests involving mock objects.
";;;;;;
;;;;;;
"To do that, I tried the """"Extract interface"""" approach, which worked quite well in one case:
";;;;;;
;;;;;;
class MyClass;;;;;;
{;;;;;;
...;;;;;;
void MyFunction(std::shared_ptr<MyOtherClass> parameter);;;;
};;;;;;
;;;;;;
;;;;;;
became:
;;;;;;
;;;;;;
class MyClass;;;;;;
{;;;;;;
...;;;;;;
void MyFunction(std::shared_ptr<IMyOtherClass> parameter);;;;
};;;;;;
;;;;;;
;;;;;;
and I passed a ProdMyOtherClass in production and a MockMyOtherClass in test. All good so far.
;;;;;;
;;;;;;
But now, I have another class using MyClass like:
;;;;;;
;;;;;;
class WorkOnMyClass;;;;;;
{;;;;;;
...;;;;;;
void DoSomeWork(std::shared_ptr<MyClass> parameter);;;;
};;;;;;
;;;;;;
;;;;;;
If I want to test WorkOnMyClass and I want to mock MyClass during that test, I have to extract an interface again. And that leads to my question, which I couldn't find an answer to so far: what would the interface look like? My guess is that it should be all abstract, so:
;;;;;;
;;;;;;
class IMyClass;;;;;;
{;;;;;;
...;;;;;;
virtual void MyFunction(std::shared_ptr<IMyOtherClass> parameter) = 0;;;;
};;;;;;
;;;;;;
;;;;;;
That leaves me with three files for every class: an all-virtual base interface class, a production implementation using all production parameters, and a mock implementation using all mock parameters. Is this the correct approach?
;;;;;;
;;;;;;
I only found simple examples, where function parameters are primitives, but not classes, which in turn need tests themselves (and may therefore require interfaces).
;;;;;;
"
52152618"",""Test"",""The first point to keep in mind is that there probably is no one way that's right and the others wrong--any answer is a matter of opinion as much as fact (though the opinions can be informed by fact).
";;;;;;
;;;;;;
That said, I'd urge at least a little caution against the use of inheritance for this case. Most such books/authors are oriented pretty heavily toward Java, where inheritance is treated as the Swiss army knife (or perhaps Leatherman) of techniques, used for every task where it might sort of come close to making a little sense, regardless of whether it's really the right tool for the job or not. In C++, inheritance tends to be viewed much more narrowly, used only when/if/where there's nearly no alternative (and the alternative is to hand-roll what's essentially inheritance on your own anyway).
;;;;;;
;;;;;;
The primary unique feature of inheritance is run-time polymorphism. For example, we have a collection of (pointers to) objects, and the objects in the collection aren't all the same type (but are all related via inheritance). We use virtual functions to provide a common interface to the objects of the various types.
;;;;;;
;;;;;;
At least as I read things, that's not the case here at all though. In a given build, you'll deal with either mock objects or production objects, but you'll always know at compile time whether the objects in use are mock or production--you won't ever have a collection of a mixture of mock objects and production objects, and need to determine at run time whether a particular object is mock or production.
;;;;;;
;;;;;;
Assuming that's correct, inheritance is almost certainly the wrong tool for the job. When you're dealing with static polymorphism (i.e., the behavior is determined at compile time) there are better tools (albeit ones Feathers and company apparently feel obliged to ignore, simply because Java fails to provide them).
;;;;;;
;;;;;;
In this case, it's pretty trivial to handle all the work at build time, without polluting your production code with any extra complexity at all. For one example, you can create a source directory with mock and production subdirectories. In the mock directory you have foo.cpp, bar.cpp and baz.cpp that implement the mock versions of classes Foo, Bar and Baz respectively. In the production directory you have production versions of the same. At build time, you tell the build tool whether to build the production or mock version, and it chooses the directory where it gets the source code based on that.
;;;;;;
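A minimal sketch of how that build-time selection could look with CMake (the directory names and the option name are illustrative, not taken from the answer above):
;;;;;;
# Toggle at configure time: cmake -DBUILD_WITH_MOCKS=ON ..
option(BUILD_WITH_MOCKS "Build against the mock implementations instead of production" OFF)

if(BUILD_WITH_MOCKS)
    # mock/foo.cpp, mock/bar.cpp, mock/baz.cpp
    add_subdirectory(src/mock)
else()
    # production/foo.cpp, production/bar.cpp, production/baz.cpp
    add_subdirectory(src/production)
endif()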
;;;;;;
Semi-unrelated aside
;;;;;;
;;;;;;
"I also note that you're using a shared_ptr as a parameter. This is yet another huge red flag. I find uses for shared_ptr to be exceptionally rare. The vast majority of times I've seen it used, it wasn't really what should have been used. A shared_ptr is intended for cases of shared ownership of an object--but most use seems to be closer to """"I haven't bothered to figure out the ownership of this object"""". The shared_ptr isn't all that huge of a problem in itself, but it's usually a symptom of larger problems.
";;;;;;
"
52152618"",""Test"",""TLDR in bold
";;;;;;
;;;;;;
"As Jeffery Coffin has already pointed out, there is no one right way to do what you're seeking to accomplish. There is no """"one-size fits all"""" in software, so take all these answers with a grain of salt, and use your best judgement for your project and circumstances. That being said, here's one potential alternative:
";;;;;;
;;;;;;
Beware of mocking hell:
;;;;;;
;;;;;;
The approach you've outlined will work: but it might not be best (or it might be, only you can decide). Typically the reason you're tempted to use mocks is because there's some dependency you're looking to break. Extract Interface is an okay pattern, but it's probably not resolving the core issue. I've leaned heavily on mocks in the past and have had situations where I really regret it. They have their place, but I try to use them as infrequently as possible, and with the lowest-level and smallest possible class. You can get into mocking hell, which you're about to enter since you have to reason about your mocks having mocks. Usually when this happens it's because there's an inheritance/composition structure and the base/children share a dependency. If possible, you want to refactor so that the dependency isn't so heavily ingrained in your classes.
;;;;;;
;;;;;;
"Isolating the """"real"""" dependency:
";;;;;;
;;;;;;
A better pattern might be Parameterize Constructor (another Michael Feathers WEWLC pattern).
;;;;;;
;;;;;;
WLOG, let's say your rogue dependency is a database (maybe it's not a database, but the idea still holds). Maybe MyClass and MyOtherClass both need access to it. Instead of Extracting Interface for both of these classes, try to isolate the dependency and pass it in to the constructors for each class.
;;;;;;
;;;;;;
Example:
;;;;;;
;;;;;;
class MyClass {
public:
    MyClass(...) : ..., db(new ProdDatabase()) {}  // Old constructor, but give it a "default" database now
    MyClass(..., Database* db) : ..., db(db) {}    // New constructor
    ...
private:
    Database* db;         // Decide on semantics about owning a database object; maybe you want to have the destructor of this class handle it, or maybe not
    // MyOtherClass* moc; // Maybe, depends on what you're trying to do
};
;;;;;;
;;;;;;
and
;;;;;;
;;;;;;
class MyOtherClass {
public:
    // similar to above, but you might want to disallow this constructor if it's too risky to have two different dependency objects floating around.
    MyOtherClass(...) : ..., db(new ProdDatabase()) {}
    MyOtherClass(..., Database* db) : ..., db(db) {}
private:
    Database* db; // Ownership?
};
;;;;;;
;;;;;;
And now that we see this layout, it makes us realize that you might even want MyOtherClass to simply be a member of MyClass (depends what you're doing and how they're related). This will avoid mistakes in instantiating MyOtherClass and ease the burden of the dependency ownership.
;;;;;;
;;;;;;
Another alternative is to make the Database a singleton to ease the burden of ownership. This will work well for a Database, but in general the singleton pattern won't hold for all dependencies.
;;;;;;
;;;;;;
Pros:
;;;;;;
;;;;;;
;;;;;;
- Allows for clean (standard) dependency injection, and it tackles the core issue of isolating the true dependency.
;;;;;;
- Isolating the real dependency makes it so that you avoid mocking hell and can just pass the dependency around.
;;;;;;
- Better future proofed design, high reusability of the pattern, and likely less complex. The next class that needs the dependency won't have to mock themselves, instead they just rope in the dependency as a parameter.
;;;;;;
;;;;;;
;;;;;;
Cons:
;;;;;;
;;;;;;
;;;;;;
- This pattern will probably take more time/effort than Extract Interface. In legacy systems, sometimes this doesn't fly. I've committed all sorts of sins because we needed to move a feature out...yesterday. It's okay, it happens. Just be aware of the design gotchas and technical debt you accrue...
;;;;;;
- It's also a bit more error prone.
;;;;;;
;;;;;;
;;;;;;
Some general legacy tips I use (the things WEWLC doesn't tell you):
;;;;;;
;;;;;;
"Don't get hell-bent about avoiding a dependency if you don't need to avoid it. This is especially true when working with legacy systems where refactorings are risky in general. Instead, you can have your tests call an actual database (or whatever the dependency is), but have the test suite connect to a small """"test"""" database instead of the """"prod"""" database. The cost of standing up a small test db is usually quite small. The cost of crashing prod because you goofed up a mock or a mock fell out of sync with reality is typically a lot higher. This will also save you a lot of coding.
";;;;;;
;;;;;;
Avoid mocks (especially heavy mocking) where possible. I am becoming more and more convinced as I age as a software engineer that mocks are mini-design smells. They are the quick and dirty: but usually illustrate a larger problem.
;;;;;;
;;;;;;
Envision the ideal API, and try to build what you envision. You can't actually build the ideal API, but imagine you can refactor everything instantly and have the API you desire. This is a good starting point for improving a legacy system, and make tradeoffs/sacrifices with your current design/implementation as you go.
;;;;;;
;;;;;;
HTH, good luck!
;;;;;;
"
52152618"",""Test"",""""";;;;;;
"52152618,""Test"",""In a C++ unit test context, should an abstract base class have other abstract base classes as function parameters?""";;;;;;
"52868909,""Test"",""";;;;;;
We want to add 100% coverage to a Spring Boot Java program we run, so there are 2 strange tests that require us to create a folder and a file with access permission denied.
;;;;;;
;;;;;;
;;;;;;
For the file it was pretty straightforward:
;;;;;;
;;;;;;
File file = new File(....path...);
// ... just creating the file with some simple code ...
// now deny permissions
file.setReadable(false);
file.setWritable(false);
;;;;;;
// Some code trying to write to that file
// throws an exception (happy, the JUnit test passes)
;;;;;;
;;;;;;
But then I want to create a directory, let's name it parentDir, and make it impossible for the same Java program to create files or folders inside it. So with the same logic:
;;;;;;
;;;;;;
File parentDir = new File(....parentDirPath...);
parentDir.mkdir();
parentDir.setReadable(false);
parentDir.setWritable(false);
;;;;;;
// Some code to create another folder inside the parentDir
File childDir = new File(....parentDirPath/childDir...);
directoryExistsOrElseCreate(childDir.toPath());
;;;;;;
// WHAT?! IT CREATES THE FOLDER even though I don't want it to...
;;;;;;
;;;;;;
Why is it still able to create new files and folders in the parentDir?
;;;;;;
;;;;;;
;;;;;;
;;;;;;
Update: I just dug into the code and found out that we are using Files.createDirectories(path) instead of file.mkdir().
;;;;
;;;;;;
The method I am testing is this:
;;;;;;
;;;;;;
public void directoryExistsOrElseCreate(final Path path) {
    try {
        if (Files.notExists(path)) {
            log.warn("Directory={} does NOT EXIST, creating...", path);
            Files.createDirectories(path);
        } else {
            log.warn("Directory={} ALREADY EXISTS, skipping...", path);
        }
    } catch (final IOException e) {
        log.error("Error during creating directory: path={}, error={}", path.toString(), e.getMessage());
        throw new AtsGenericException(AtsGenericErrorCode.IO_ERROR, new Object[]{e.getMessage()});
    }
}
;;;;;;
"
52868909"",""Test"",""According to the Javadoc, in your scenario, mkdir() should return false, and not throw an exception.
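A small sketch of what that means for the test (paths are illustrative, and this is only a starting point): check the boolean results instead of expecting exceptions.
;;;;;;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Test;

public class ParentDirPermissionTest {

    @Test
    public void mkdirReportsFailureViaReturnValue() {
        File parentDir = new File(System.getProperty("java.io.tmpdir"), "parentDir");
        assertTrue(parentDir.mkdir());

        // setWritable() itself returns a boolean and may be ignored on some
        // platforms (notably for directories on Windows), which would explain
        // why child folders can still be created in your run.
        boolean writeDenied = parentDir.setWritable(false);

        File childDir = new File(parentDir, "childDir");
        if (writeDenied) {
            // mkdir() signals failure through its return value; it does not throw.
            assertFalse(childDir.mkdir());
        }
    }
}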
";;;;;;
"
52868909"",""Test"",""I have only one answer for you: don't waste your energy getting 100% code coverage. There are plenty of situations where it's not worth the effort and thus a waste of time and money. Moreover, convoluted tests to cover hard-to-test things like private constructors, exceptions from utility classes or direct interactions with the system (file, network etc) are hard to understand and maintain. They only add technical debt to your application without adding any value at all.
";;;;;;
;;;;;;
Don't write separate unit tests for anemic model classes either (i.e. POJO's with only fields, getters and setters and no logic). These classes should be used elsewhere and covered as part of other tests.
;;;;;;
;;;;;;
Set a goal in the range 60-80% and focus your test writing efforts on your business and transformation logic, e.g. controllers, mappers, services. This is the logic that is hard to understand, changes most often and determines the functionality of your application.
;;;;;;
;;;;;;
To cover misconfigurations or bugs in any uncovered code, write some basic integration (end-to-end) tests using stubbed environment components like an H2 database, or a WireMock endpoint with a canned response. These will show if your system fails to interact at a basic level much better than any cooked up reflection ridden unit tests can.
;;;;;;
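As a rough sketch of that kind of test setup (the endpoint, port and JSON body are invented for illustration): an H2 datasource can be switched in through test properties, and WireMock can serve the canned response.
;;;;;;
// src/test/resources/application-test.properties (hypothetical):
//   spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
//   spring.datasource.driver-class-name=org.h2.Driver

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class PartnerApiStub {

    public static WireMockServer start() {
        WireMockServer server = new WireMockServer(8089); // port chosen arbitrarily
        server.start();
        // Canned response for the endpoint the application calls during the test
        server.stubFor(get(urlEqualTo("/partners/1"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 1, \"name\": \"test-partner\"}")));
        return server;
    }
}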
"
52868909"",""Test"",""""";;;;;;
"52868909,""Test"",""Java mkdir() folder in which you can't create subfolders or files ( throw access denied)""";;;;;;
"54643532,""Test"",""I'm trying to create Unit Test. I have class User:
";;;;;;
;;;;;;
public class User;;;;;;
{;;;;;;
public int UsersCount;;;;;;
{;;;;;;
get;;;;;;
{;;;;;;
using (MainContext context = new MainContext());;;;;;
{;;;;;;
return context.Users.Count();;;;;;
};;;;;;
};;;;;;
};;;;;;
public Guid Id { get; set; } = Guid.NewGuid();;;;
public string UserName { get; set; };;;;
public string Password { get; set; };;;;
public Contact UserContact { get; set; };;;;
};;;;;;
;;;;;;
;;;;;;
My first test is UsersCount_Test test which tests UsersCount property:
;;;;;;
;;;;;;
[TestMethod];;;;;;
public void UsersCount_Test();;;;;;
{;;;;;;
var user = new User();;;;;;
var context = new MainContext();;;;;;
int usersCount = context.Users.Count();;;;;;
context.Users.Add(new User());;;;;;
context.SaveChanges();;;;;;
" Assert.AreEqual(usersCount + 1, user.UsersCount, $""""It should be {usersCount + 1} because we're adding one more user"""")";;;;;;
};;;;;;
;;;;;;
;;;;;;
If I add new test method in my test class (I'm using separate classes for testing each entity), I need to create new instance of User. That's why I did this:
;;;;;;
;;;;;;
public class BaseTest<T>;;;;
{;;;;;;
public T Testntity;;;;;;
;;;;;;
public MainContext TestContext = new MainContext();;;;;;
};;;;;;
;;;;;;
;;;;;;
Now each test class inherits from this class. I also created a test initializer method. Now my test class looks like this:
;;;;;;
;;;;;;
[TestClass];;;;;;
public class UserTest : BaseTest<User>;;;;
{;;;;;;
[TestMethod];;;;;;
public void UsersCount();;;;;;
{;;;;;;
int usersCount = TestContext.Users.Count();;;;;;
TestContext.Users.Add(new User());;;;;;
TestContext.SaveChanges();;;;;;
" Assert.AreEqual(usersCount + 1, Testntity.UsersCount, $""""It should be {usersCount + 1} because we're adding one more user"""")";;;;;;
};;;;;;
;;;;;;
[TestInitialize];;;;;;
public void SetTestntity();;;;;;
{;;;;;;
Testntity = new User();;;;;;
};;;;;;
};;;;;;
;;;;;;
;;;;;;
Now I'm adding new property to User and writing some logic:
;;;;;;
;;;;;;
string phoneNumber;;;;;;
public string PhoneNumber { get { return phoneNumber; } set { SetUserContact(phoneNumber, value); phoneNumber = value; } };;;
;;;;;;
void SetUserContact(string oldContact, string newContact);;;;;;
{;;;;;;
UserContact.ContactsList.Remove(oldContact);;;;;;
UserContact.ContactsList.Add(newContact);;;;;;
};;;;;;
;;;;;;
;;;;;;
After that I'm creating new test :
;;;;;;
;;;;;;
[TestMethod];;;;;;
public void ContactList_Test();;;;;;
{;;;;;;
" var newPhone = """"+8888888888888""""";;;;;;
Testntity.PhoneNumber = newPhone;;;;;;
Assert.IsTrue(Testntity.UserContact.ContactsList.Any(a =>" a == newPhone), $""""It should contains {newPhone}"""")";;;;;
};;;;;;
;;;;;;
;;;;;;
The test fails because UserContact of Testntity is null. I understood that Testntity should be created by the initialization logic. After that I fixed the test initializer method:
;;;;;;
;;;;;;
[TestInitialize];;;;;;
public void SetTestntity();;;;;;
{;;;;;;
Testntity = new User() { UserContact = new Contact() };;;;;;
};;;;;;
;;;;;;
;;;;;;
Here is Contact model
;;;;;;
;;;;;;
public class Contact;;;;;;
{;;;;;;
public Guid Id { get; set; } = Guid.NewGuid();;;;
public virtual List<string> ContactsList { get; set; } = new List<string>();
};;;;;;
;;;;;;
;;;;;;
My question is: how can I set Testntity only one time - is that possible (maybe keep it in memory and reuse it when the SetTestntity method is called)? The SetTestntity method creates a new entity for each test, and that takes more development time (for example, if creating an instance of UserContact takes 3 seconds, every test runs for more than 3 seconds). Another way, in this case, would be to set UserContact in the ContactList test, but I don't think that's a good idea: in the future, when we add new logic, I would need to fix each test. Please give me any suggestions and/or ideas.
;;;;;;
"
54643532"",""Test"",""TestInitialize and TestCleanup are run before and after each test; this is to ensure that no tests are coupled.
";;;;;;
;;;;;;
If you want to run methods before and after ALL tests only once, decorate relevant methods with the ClassInitialize and ClassCleanup attributes.
;;;;;;
;;;;;;
You can use the following additional attributes as you write your tests:
;;;;;;
;;;;;;
Sample code-
;;;;;;
;;;;;;
// Use ClassInitialize to run code before running the first test in the class
[ClassInitialize()];;;;;;
public static void MyClassInitialize(TestContext testContext) { };;;;;;
;;;;;;
// Use ClassCleanup to run code after all tests in a class have run;;;;;;
[ClassCleanup()];;;;;;
public static void MyClassCleanup() { };;;;;;
;;;;;;
// Use TestInitialize to run code before running each test ;;;;;;
[TestInitialize()];;;;;;
public void MyTestInitialize() { };;;;;;
;;;;;;
// Use TestCleanup to run code after each test has run;;;;;;
[TestCleanup()];;;;;;
public void MyTestCleanup() { }
;;;;;;
;;;;;;
;;;;;;
;;;;;;
;;;;;;
Basically you can have your SetTestntity logic in your ClassInitialize method (note that a ClassInitialize method must be static, so the shared entity would need to live in a static field).
Hope it helps.
;;;;;;
"
54643532"",""Test"",""TestInitialize runs before each test; if you really have to, you could use ClassInitialize to run the test initialization for the class only once.
";;;;;;
;;;;;;
BUT
;;;;;;
;;;;;;
From what I'm seeing, your performance issue is caused by the design and architecture of your application, where you are breaking the single responsibility principle. Creating a static database entity or sharing it across tests is not a solution; it only creates more technical debt. Once you share anything across tests it has to be maintained across tests, AND by definition unit tests SHOULD run separately and independently to allow testing each scenario with fresh data.
;;;;;;
;;;;;;
You shouldn't be creating database models that depend on MainContext. Should a single User really know how many Users there are in the database? If not, then please create a separate repository that has MainContext injected and a GetUsersCount() method, and unit test that with an in-memory database by adding a few users, calling the specific implementation and checking that the correct number of users has been added, like the following:
;;;;;;
;;;;;;
public interface IUsersRepository;;;;;;
{;;;;;;
int GetUsersCount();;;;;;
};;;;;;
;;;;;;
public class UsersRepository : IUsersRepository;;;;;;
{;;;;;;
private readonly EntityFrameworkContext _context;;;;;;
;;;;;;
public UsersRepository(EntityFrameworkContext context);;;;;;
{;;;;;;
_context = context;;;;;;
};;;;;;
;;;;;;
public int GetUsersCount();;;;;;
{;;;;;;
return _context.Users.Count();;;;;;
};;;;;;
};;;;;;
;;;;;;
;;;;;;
Later, only the methods that really use the context should be tested with an in-memory database, and for methods that make use of IUsersRepository, each specific method should be mocked, since it is tested separately.
;;;;;;
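A minimal sketch of such a test against the EF Core in-memory provider (this assumes EntityFrameworkContext exposes a constructor taking DbContextOptions and that the InMemory package is referenced; adapt the names to your own context):
;;;;;;
using Microsoft.EntityFrameworkCore;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UsersRepositoryTests
{
    [TestMethod]
    public void GetUsersCount_ReturnsNumberOfAddedUsers()
    {
        // A dedicated in-memory database per test keeps tests independent
        var options = new DbContextOptionsBuilder<EntityFrameworkContext>()
            .UseInMemoryDatabase(databaseName: "GetUsersCount_ReturnsNumberOfAddedUsers")
            .Options;

        using (var context = new EntityFrameworkContext(options))
        {
            context.Users.Add(new User());
            context.Users.Add(new User());
            context.SaveChanges();

            var repository = new UsersRepository(context);

            Assert.AreEqual(2, repository.GetUsersCount());
        }
    }
}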
"
54643532"",""Test"",""""";;;;;;
"54643532,""Test"",""How to get test initialize in memory and use in each test""";;;;;;
"55671363,""Test"",""Top Level Problem
";;;;;;
;;;;;;
Our team has inherited a very large and brittle python 2 (and C,C++, few others) codebase that is very difficult and costly to update. Tons of dependencies. Very few tests. Adding behavior improvement and converting to python 3 both have appeared to be monumental tasks. Even making small changes for a new release we've had to revert many times as it's broken something.
;;;;;;
;;;;;;
It's a story of insufficient testing and the major technical debt that comes with it.
;;;;;;
;;;;;;
Still, the project is so big and helpful that it seems a no-brainer to update it rather than re-invent everything it does.
;;;;;;
;;;;;;
Sub Problem
;;;;;;
;;;;;;
How to add a massive amount of missing small tests. How can we automatically generate even simple input/output acceptance unit tests from the high level user acceptance tests?
;;;;;;
;;;;;;
Attempted Solution
;;;;;;
;;;;;;
There are about 50 large high level behavioral tests that this codebase needs to handle. Unfortunately, it takes days to run them all, not seconds. These exercise all the code we care the most about, but they are just too slow. (Also a nerdy observation, 80% of the same code is exercised in each one). Is there a way to automatically generate the input/output unit tests from automatic stack examination while running these?
;;;;;;
;;;;;;
In other words, I have high level tests, but I would like to automatically create low level unit and integration tests based on the execution of these high level tests.
;;;;;;
;;;;;;
Mirroring the high level tests with unit tests does exactly zero for added code coverage, but what it does do is make the tests far faster and far less brittle. It will allow quick and confident refactoring of the pieces.
;;;;;;
;;;;;;
"I'm very familiar with using TDD to mitigate this massive brittle blob issue in the first place as it actually speeds up development in a lot of cases and prevents this issue, but this is a sort of unique beast of a problem to solve as the codebase already exists and """"works"""" ;).
;;;;;
;;;;;;
Any automated test tool tips? I googled around a lot, and I found some things that may work for C, but I can't find anything for python to generate pytests/unittest/nose or whatever. I don't care what python test framework it uses (although would prefer pytest). I must be searching the wrong terms as it seems unbelievable a test generation tool doesn't exist for python.
;;;;;;
"
55671363"",""Test"",""I've taken a very lazy and practical imperfect solution, and it took me about 40 hrs: 20 of which was wrapping my head around the C part enough to write unit tests for it and fix it, which amounted to about 30 lines -- the other 20 was fixing mostly trivial bytes/strings issues that futurize couldn't possibly handle, and setting up CI.
";;;;;;
;;;;;;
;;;;;;
- Run futurize
;;;;;;
- Run the most desirable use case as an E2E test and fix issues, complete with new critical unit tests
;;;;;;
- CI w/ tox on 2.7/3.x for these
;;;;;;
;;;;;;
;;;;;;
End result is an unchanged 2.7 codebase and a minimally working beta 3.7 codebase, the long tail 3.7 support for secondary use cases to be solved over time, see Dirk's long term answer.
;;;;;;
"
55671363"",""Test"",""First, it's good that you already have some higher-level tests running. Parallelize their execution, run each on different hardware, buy faster hardware if possible - as the refactoring task you are about to handle seems to be huge, this will still be the cheapest way of doing it. Consider breaking these higher-level tests down into smaller ones.
";;;;;;
;;;;;;
Second, as lloyd has mentioned, for the components you plan to refactor, identify the component's boundaries and, during execution of the higher-level tests, record input and output values at those boundaries. With some scripting, you may be able to transform the recorded values into a starting point for unit-test code. Only in rare cases will this immediately give you useful unit tests: normally you will need to do some non-trivial architectural analysis and probably re-design:
;;;;;;
;;;;;;
;;;;;;
- What should be the units to be tested? Single methods, groups of methods, groups of classes? For example, setter methods cannot sensibly be tested without other methods. Or, to test any method, first a constructed object will have to exist, and thus some call to the constructor will be needed.
;;;;;;
- What are the component's boundaries? What are the depended-on-components? With which of the depended-on-components can you just live, which would need to be mocked? Many components can just be used as they are - you would not mock math functions like sin or cos, for example.
;;;;;;
- What are the boundaries between unit-tests, that is, at which points in the long-running tests would you consider a unit-test to start and end? Which part of the recording is considered setup, which execution, which verification?
;;;;;;
;;;;;;
;;;;;;
All these difficulties explain to me, why some generic tooling may be hard to find and you will probably be left to specifically created scripts for test code generation.
;;;;;;
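To make the record-at-the-boundary idea above concrete, here is a rough, purely illustrative sketch of what such a specifically created script could start from (the decorated function is hypothetical): decorate a boundary function, run the slow high-level tests once, and then turn the recorded lines into unit-test skeletons.
;;;;;;
import functools
import json

RECORD_FILE = "boundary_recordings.jsonl"  # illustrative location

def record_boundary(func):
    """Append one JSON line per call with the arguments and the result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        entry = {
            "function": func.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "result": repr(result),
        }
        with open(RECORD_FILE, "a") as fh:
            fh.write(json.dumps(entry) + "\n")
        return result
    return wrapper

@record_boundary
def price_order(order_lines, discount=0.0):  # hypothetical boundary function
    return sum(qty * price for qty, price in order_lines) * (1 - discount)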
"
55671363"",""Test"",""""";;;;;;
"55671363,""Test"",""Automatic low level testing generation based on high level desired behavioral examples""";;;;;;
"5880,""No"",""I'm pretty new to my company (2 weeks) and we're starting a new platform for our system using .NET 3.5 Team Foundation from DotNetNuke. Our """"architect"""" is suggesting we use one class project. Of course, I chime back with a """"3-tier"""" architecture (Business, Data, Web class projects).
";;;;;;
;;;;;;
Is there any disadvantages to using this architecture? Pro's would be separation of code from data, keeping class objects away from your code, etc.
;;;;;;
"
5880"",""No"",""As with anything, abstraction creates complexity, and so the complexity of doing N-tiered should be properly justified, e.g., does N-tiered actually benefit the system? There will be small systems that will work best with N-tiered, although a lot of them will not.
";;;;;;
;;;;;;
Also, even if your system is small at the moment, you might want to add more features to it later -- not going N-tiered might constitute a sort of technical debt on your part, so you have to be careful.
;;;;;;
"
5880"",""No"",""The only disadvantage is complexity, but really, how hard is it to add some domain objects and bind to a list of them as opposed to using a dataset? You don't even have to create three separate projects; you can just create 3 separate folders within the web app and give each one a namespace like YourCompany.YourApp.Domain, YourCompany.YourApp.Data, etc.
";;;;;;
;;;;;;
The big advantage is having a more flexible solution. If you start writing your app as a data-centric application, strongly coupling your web forms pages to datasets, you are going to end up doing a lot more work later migrating to a more domain-centric model as your business logic grows in complexity.
;;;;;;
;;;;;;
Maybe in the short term you focus on a simple solution by creating very simple domain objects and populating them from datasets, then you can add business logic to them as needed and build out a more sophisticated ORM as needed, or use nhibernate.
;;;;;;
"
5880"",""No"",""Because you want the capability of being able to distribute the layers onto different physical tiers (I always use """"tier"""" for physical, and """"layer"""" for logical), you should think twice before just putting everything into one class because you've got major refactorings to do if or when you do need to start distributing.
";;;;;;
"
5880"",""No"",""I would be pushing hard for the N tiered approach even if it's a small project. If you use an ORM tool like codesmith + nettiers you will be able to quickly setup the projects and be developing code that solves your business problems quickly.
";;;;;;
;;;;;;
"It kills me when you start a new project and you spend days sitting around spinning wheels talking about how the """"architecture"""" should be architected. You want to be spending time solving the business problem, not solving problems that other people have solved for you. Using an ORM (it doesn't really matter which one, just pick one and stick to it) to help you get initial traction will help keep you focused on the goals of the project and not distract you trying to solve """"architecture"""" issues.
";;;;;;
;;;;;;
If, at the end of the day, the architect wants to go with the one-project approach, there is no reason you can't create an app_code folder with a BLL and DAL folder to separate the code for now, which will help you move to an N-Tiered solution later.
;;;;;;
"
5880"",""No"",""It tends to take an inexperienced team longer to build 3-tier. It's more code, so more bugs. I'm just playing the devil's advocate though.
";;;;;;
"
5880"",""No"",""I guess a fairly big downside is that the extra volume of code that you have to write, manage and maintain for a small project may just be overkill.
";;;;;;
;;;;;;
It's all down to what's appropriate for the size of the project, the expected life of the final project and the budget! Sometimes, whilst doing things 'properly' is appealing, doing something a little more 'lightweight' can be the right commercial decision!
;;;;;;
"
5880"",""No"",""""";;;;;;
"5880,""No"",""Are there any negative reasons to use an N-Tier solution?""";;;;;;
"310010,""No"",""There are (at least) two ways that technical debts make their way into projects. The first is by conscious decision. Some problems just are not worth tackling up front, so they are consciously allowed to accumulate as technical debt. The second is by ignorance. The people working on the project don't know or don't realize that they are incurring a technical debt. This question deals with the second. Are there technical debts that you let into your project that would have been trivial to keep out (""""If I had only known..."""") but once they were embedded in the project, they became dramatically more costly?
";;;;;;
"
310010"",""No"",""Not having a cohesive design up front tends to lead to it. You can overcome it to a degree if you take the time to refactor frequently, but most people keep bashing away at an overall design that does not match their changing requirements. This may be a more general answer than what you're looking for, but it does tend to be one of the more popular causes of technical debt.
";;;;;;
"
310010"",""No"",""At a previous company they used and forced COM for stuff it wasn't needed for.
";;;;;;
;;;;;;
Another company with a C++ codebase didn't allow STL. (WTF?!)
;;;;;;
;;;;;;
Another project I was on made use of MFC just for the collections - No UI was involved. That was bad.
;;;;;;
;;;;;;
The ramifications of course for those decisions were not great. In two cases we had dependencies on pitiful MS technologies for no reason and the other forced people to use worse implementations of generics and collections.
;;;;;;
;;;;;;
"I classify these as """"debt"""" since we had to make decisions and trade-offs later on in the projects due to the idiotic decisions up front. Most of the time we had to work around the shortcomings.
";;;;;;
"
310010"",""No"",""The cliche is that premature optimization is the root of all evil, and this certainly is true for micro-optimization. However, completely ignoring performance at a design level in an area where it clearly will matter can be a bad idea.
";;;;;;
"
310010"",""No"",""While not everyone may agree, I think that the largest contributor to technical debt is starting from the interface of any type of application and working down in the stack. I have come to learn that there is less chance of deviation from project goals by implementing a combination of TDD and DDD, because you can still develop and test core functionality with the interface becoming the icing.
";;;;;;
;;;;;;
"Granted, it isn't a technical debt in itself, but I have found that top-down development is more of an open doorway that is inviting to decisions that are not well thought out - all for the sake of doing something the """"looks cool"""". Also, I understand that not everyone will agree or feel the same way about it, so your mileage might vary on this one. Team dynamics and skills are a part of this equation, as well.
";;;;;;
"
310010"",""No"",""I really struggle with this one, trying to balance YAGNI versus """"I've been burned on this once too often""""
";;;;;;
;;;;;;
My list of things I review on every application:
;;;;;;
;;;;;;
;;;;;;
- Localization:;;;;;;
;;;;;;
"
";;;;;;
- Is Time Zone ever going to be important? If yes, persist date/times in UTC.
;;;;;;
- Are messages/text going to be localized? If yes, externalize messages.
;;;;;;
;;;;;;
- Platform Independence? Pick an easily ported implementation.
;;;;;;
;;;;;;
;;;;;;
Other areas where technical debt can be incurred include:
;;;;;;
;;;;;;
;;;;;;
- Black-Hole Data collection: Everything goes in, nothing ever goes out. (No long-term plan for archiving/deleting old data)
;;;;;;
- Failure to keep MVC or tiers cleanly separated over the application lifetime - for example, allowing too much logic to creep into the View, making adding an interface for mobile devices or web services much more costly.
;;;;;;
;;;;;;
;;;;;;
I'm sure there will be others...
;;;;;;
"
310010"",""No"",""Scalability - in particular for data-driven business applications. I've seen more than once where all seems to run fine, but when the UAT environment finally gets stood up with database table sizes that approach production's, things start falling down right and left. It's easy for an online screen or batch program to run when the DB is basically holding all rows in memory.
";;;;;;
"
310010"",""No"",""Not starting a web project off using a JavaScript framework and hand-implementing stuff that was already available. Maintaining the hand-written JavaScript became enough of a pain that I ended up ripping it all out and redoing it with the framework.
";;;;;;
"
310010"",""No"",""Unit Testing -- I think that failing to write tests as you go incurs a HUGE debt that is hard to make up. Although I am a fan of TDD, I don't really care if you write your tests before or after you implement the code... just as long as you keep your tests synced with your code.
";;;;;;
"
310010"",""No"",""One example of this is running a database in a mode that does not support Unicode. It works right up until the time that you are forced to support Unicode strings in your database. The migration path is non-trivial, depending on your database.
";;;;;;
;;;;;;
For example, SQL Server has a fixed maximum row length in bytes, so when you convert your columns to Unicode strings (NCHAR, NVARCHAR, etc.) there may not be enough room in the table to hold the data that you already have. Now, your migration code must make a decision about truncation or you must change your table layout entirely. Either way, it's much more work than just starting with all Unicode strings.
;;;;;;
"
310010"",""No"",""Storing dates in a database in local timezone. At some point, your application will be migrated to another timezone and you'll be in trouble. If you ever end up with mixed dates, you'll never be able to untangle them. Just store them in UTC.
";;;;;;
"
310010"",""No"",""Ignoring security problems entirely.
";;;;;;
;;;;;;
Cross-site scripting is one such example. It's considered harmless until you get alert('hello there!') popping up in the admin interface (if you're lucky - script may as well silently copy all data admins have access to, or serve malware to your customers).
;;;;;;
;;;;;;
And then you need 500 templates fixed yesterday. Hasty fixing will cause data to be double-escaped, and won't plug all vulnerabilities.
;;;;;;
"
310010"",""No"",""""";;;;;;
"310010,""No"",""Are there specific """"technical debts"""" that are not worth incurring?""";;;;;;
"401541,""No"",""I've been tasked with converting an existing ASP.NET site from using InProc session management to using the ASP.NET State Server.
";;;;;;
;;;;;;
Of course what this means is that anything stored in the Session must be serializable.
;;;;;;
;;;;;;
One of the most complicated pages in the app is currently storing an ASP.NET control collection to the Session. This is failing miserably because the controls cannot be serialized automatically.
;;;;;;
;;;;;;
Short of totally rewriting how the page works to prevent the need for storing the control collection in the Session, does anyone have a trick/solution for making the collection serializable?
;;;;;;
"
401541"",""No"",""The first answer that comes to mind is to do a partial rewrite (I don't think there's going to be an easy answer to this). If it's a small number of control types, write your own controls that inherit from those controls and also implement ISerializable. Then, using search and replace, replace the page's controls with your versions. If you are using a large number of control types, you might spend more time extending the standard types than you would refactoring the page.
";;;;;;
;;;;;;
The work is going to be in the serialization and deserialization of the controls when you initialize them, to make sure you're capturing what you need (the TextBox values, the IsSelected, etc.).
;;;;;;
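A rough sketch of that inherit-and-implement approach (only the Text value is captured here, and whether this round-trips cleanly for a given control and formatter is something you would have to verify, so treat it purely as a starting point):
;;;;;;
using System;
using System.Runtime.Serialization;
using System.Web.UI.WebControls;

[Serializable]
public class SerializableTextBox : TextBox, ISerializable
{
    public SerializableTextBox() { }

    // Deserialization constructor: restore the state captured in GetObjectData.
    protected SerializableTextBox(SerializationInfo info, StreamingContext context)
    {
        Text = info.GetString("Text");
    }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        // Capture only the state the page actually needs between requests.
        info.AddValue("Text", Text);
    }
}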
;;;;;;
"This is obviously a hack, but if your priority really is not rewriting the functionality of that particular page, this might work for you. Then, of course, you need to add this solution to the """"technical debt"""" that your application is accruing, to make sure it's always on someone's radar to refactor at some point.
";;;;;;
"
401541"",""No"",""Don't store control collections in session state. Tess has a lot of articles about this, for example this one.
";;;;;;
"
401541"",""No"",""Rewrite the page. You'll thank yourself later. There are sure to be other problems if the original """"programmer"""" (and I use that term loosely here) thought it was a good idea to store a control hierarchy in session.
";;;;;;
"
401541"",""No"",""""";;;;;;
"401541,""No"",""Serialize ASP.NET Control collection""";;;;;;
"410822,""No"",""I'm a big fan of ruby on rails, and it seems to incorporate many of the 'greatest hits' of web application programming techniques. Convention over configuration in particular is a big win to my mind.
";;;;;;
;;;;;;
"However I also have the feeling that some of the convenience I am getting is coming at the expense of technical debt that will need to be repaid down the road. It's not that I think ROR is quick and dirty, as I think it incorporates a lot of best practices and good default options in many cases. However, it seems to me that just doesn't cover some things yet (in particular there is little direct support for security in the framework, and plugins that I have seen are variable in quality).
";;;;;;
;;;;;;
I'm not looking for religious opinions or flamewars here, but I'd be interested to know the community's opinion on what areas Rails needs to improve on, and/or things that users of Rails need to watch out for on their own because the framework won't hold their hand and guide them to do the right thing.
;;;;;;
"
410822"",""No"",""With any level of abstraction there is a bit of a toll you pay - genericized methods aren't quite as fast as those specific to something built just for your purpose. Fortunately though, it's all right there for you to change. Don't like the query plans that come out of the dynamic find methods? write your own, good to go.
";;;;;;
;;;;;;
"Someone above put it well - hardware is cheaper than developers. I'd add """"at a sufficiently low amount of hardware""""
";;;;;;
"
410822"",""No"",""I'm reading Deploying Rails Applications and recommend it highly to answer your concerns.
";;;;;;
;;;;;;
The book is full of suggestions to make life easier, taking a deployment-aware approach to your Rails development from scratch, rather than leaving it to later.
;;;;;;
;;;;;;
I don't think the choice of RoR implies a technical debt but just reading the first few chapters alerted me to practices I should be following, particularly on shared hosts, such as freezing the core rails gems so you can't be disrupted by upgrades on the host.
;;;;;;
;;;;;;
The 30-page chapter on Shared Hosts includes memory quota tips such as using multiple accounts (if possible) with one Rails app per account. It also warns about popular libraries such as RMagick possibly pushing your memory size to the point where your processes are killed (such as a 100MB limit, which it suggests some hosts periodically apply).
;;;;;;
"
410822"",""No"",""From my experience, by far the biggest tolls you end up paying with RoR are:
";;;;;;
;;;;;;
;;;;;;
- Pretty big default stack (not counting plugins you might be using)
;;;;;;
- Updating models tends to be a pain in the ass, at least in production servers.
;;;;;;
- Updating Rails or Ruby themselves is a bit more complicated than it should, but this differs depending on your server setup.
;;;;;;
- As ewalshe mentioned, deployment is sometimes a drag, and further down the road, should you require it, scaling gets a bit iffy, as it does with most development frameworks.
;;;;;;
;;;;;;
;;;;;;
That being said, I'm an avid user of RoR for some projects, and with the actual state of hardware, even though you do end up paying some tech debt to using it, it's almost negligible. And one can hope these issues will be reviewed eventually and solved.
;;;;;;
"
410822"",""No"",""The article you refer to defines technical debt as
";;;;;;
;;;;;;
;;;;;;
[the] eventual consequences of;;;;;;
slapdash software architecture and;;;;;;
hasty software development
;;;;;;
;;;;;;
;;;;;;
With rails, any development that is not test driven incurs technical debt. But that is the case with any platform.
;;;;;;
;;;;;;
At an architectural level Rails provides some deployment challenges. A busy site must scale with lots of hardware or use intelligent caching strategies.
;;;;;;
;;;;;;
My advice to anyone adapting Rails would be to:
;;;;;;
;;;;;;
;;;;;;
- use TDD for all your development
;;;;;;
- verify the quality of any plugin you use by reading its tests. If they are not clear and complete, avoid the plugin
;;;;;;
- read """"Rails Recipes"""" and """"Advanced Rails Recipes"""" (Advanced Rails Recipes has a good recipe for adding authentication in a RESTful way)
;;;;;;
- be prepared to pay for hardware to scale your site (hardware is cheaper than development time)
;;;;;;
;;;;;;
"
410822"",""No"",""I love Rails too, but it's important for us to understand the shortcomings of the framework that we use. Though it might be a broad topic, addressing these issues won't hurt anyone.
";;;;;;
;;;;;;
Aside from security issues, one other big issue is DEPLOYMENT on Shared Hosts. PHP thrives in shared hosting environments but Rails is still lagging behind.
;;;;;;
;;;;;;
Of course most professional Rails developers know that their apps need fine-tuned servers for production and they will obviously deploy on Rails-Specific hosts.
;;;;;;
;;;;;;
In order for Rails to continue success the core team should address this issue, especially with Rails 3.0 (Merb +Rails) coming..
;;;;;;
;;;;;;
An example of this is simple: I have a Bluehost account, and I noticed the Rails icon in my cPanel. I talked to the Bluehost support and they said it's more or less a dummy icon, and that it doesn't function properly.
;;;;;;
;;;;;;
Having said that, any professional who wanted to deploy a Rails app would not use Bluehost. But it does hurt Rails when hosts say that they support it and then users run into problems which their support knows nothing about.
;;;;;;
"
410822"",""No"",""Regardless of framework the programmer needs to know what she's doing. I'd say that it's much easier to build a secure web application using something as mature, well designed and widely adapted as Ruby on Rails than going without the framework support.
";;;;;;
;;;;;;
Take care with plugins and find out how they work (know what you do, again).
;;;;;;
"
410822"",""No"",""""";;;;;;
"410822,""No"",""What (if any) technical debt am I incurring with Ruby on Rails?""";;;;;;
"431107,""No"",""We have started using Spring framework in my project. After becoming acquainted with the basic features (IoC) we have started using spring aop and spring security as well.
";;;;;;
;;;;;;
The problem is that we now have more than 8 different context files and I feel we didn't give enough thought for the organization of those files and their roles. New files were introduced as the project evolved.;;;;;;
We have different context files for: metadata, AOP, authorization, services, web resources (it's a RESTful application). So when a developer wants to add a new bean, it's not always clear in which file he should add it. We need a methodology.
;;;;;;
;;;;;;
The question:
;;;;;;
;;;;;;
Is there a best practice for spring files organization?
;;;;;;
;;;;;;
Should the context files encapsulate layers (DAL , Business Logic, Web) or use cases ? or Flows?
;;;;;;
"
431107"",""No"",""Breaking the config into separate files is useful to me in terms of testing. On a small project, I'll put Spring Security config into """"securityContext.xml"""" and the rest of my beans into """"applicationContext.xml."""" Then while running integration tests, it's easy to enable or disable security simple by choosing whether to include securityContext.xml. This almost resembles AOP in a way, in that you add more functionality to the application by choosing whether to include particular files.
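For example (a sketch using Spring's JUnit 4 test support; the test class name is made up), the choice is just a matter of which locations you list:
;;;;;;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
// Add "classpath:securityContext.xml" to the list when the test should run with security enabled.
@ContextConfiguration(locations = { "classpath:applicationContext.xml" })
public class ServiceIntegrationTest {
    // ... tests autowiring beans defined in applicationContext.xml ...
}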
";;;;;;
"
431107"",""No"",""I would follow spring's recommendations and place the context files in META-INF/spring as described in the Spring Roo documentation. In general, I would recommend trying out roo and following their project structure and layout.
";;;;;;
;;;;;;
Example
;;;;;;
;;;;;;
src/;;;;;;
+-- main/;;;;;;
| +-- java/;;;;;;
| \-- resources/;;;;;;
| +-- META-INF/;;;;;;
| | \-- spring/ ‹ normal spring context files;;;;;;
| | +-- context.xml;;;;;;
| | \-- context-services.xml;;;;;;
| \-- other files;;;;;;
|;;;;;;
+-- test/;;;;;;
| +-- java/;;;;;;
| \-- resources/;;;;;;
| +-- META-INF/;;;;;;
| | \-- spring/ ‹ context files for testing;;;;;;
| | +-- context-test.xml;;;;;;
| | \-- context-dao-test.xml ;;;;;;
| \-- other files;;;;;;
|;;;;;;
\-- pom.xml;;;;;;
;;;;;;
;;;;;;
Spring XML vs annotations
;;;;;;
;;;;;;
"There are many good articles on the topic, but I would like to break up a common misconception, because both approaches have their merits: If you want to separate the configuration from the actual implementation, it is easier with XML, but you can achieve the same thing with annotations, as krosenvold said. However, when using XML configuration files, bean names are only required, if the bean has to be referenced directly. You can always use auto-wiring by name or by type.
";;;;;;
;;;;;;
The only important thing is that you should stay consistent throughout the project, or, where possible, across your company's projects.
;;;;;;
"
431107"",""No"",""I find that I break them out by layer.
";;;;;;
;;;;;;
When I write unit tests for each layer I override the production context with values pertinent for the tests.
;;;;;;
"
431107"",""No"",""Yep - split on similar roles for the beans therein. As for annotations, I believe they """"may"""" have a small role to play, perhaps with transaction definitions, but otherwise they just forever bind your code and you might as well be adding Spring (or any other 3rd-party) references directly everywhere. For me, annotations = shortcut and technical debt. They aren't externally configurable, so it's not trivial to rewire or unwire your code, and it limits reuse. A given bean is forever stuck with its annotated dependencies and configuration, so it can't be used by multiple projects/processes simultaneously with different wiring and config.
Just my 2 cents.
;;;;;;
"
431107"",""No"",""Spring context files contain definitions of beans, so I think that it is best to follow OO principle and structure them the same way you structure your classes in packages. We usually create packages to encapsulate a set of classes that work together to solve a specific problem. A package usually encapsulates a horizontal layer (database layer, middleware, business logic or part of them). There are occasions that a package contain classes that correspond to a horizontal layer (use case or flow as you've mentioned). In general I would recommend to create one context file for every package or set of packages. When you add a new bean, add it to the context file that corresponds to the package of the class.
";;;;;;
;;;;;;
Of course this shouldn't be a very strict rule, as there might be cases that it would beneficial to follow another practice.
;;;;;;
"
431107"",""No"",""Start with applicationContext.xml and separate when there's a lot of beans which have something in common.
";;;;;;
;;;;;;
To give you some idea of a possible setup, in the application I'm currently working on, here's what I have in server:
;;;;;;
;;;;;;
;;;;;;
- applicationContext.xml
;;;;;;
- securityContext.xml
;;;;;;
- schedulingContext.xml
;;;;;;
- dataSourcecontext.xml
;;;;;;
- spring-ws-servlet.xml (Spring Web Services related beans)
;;;;;;
;;;;;;
;;;;;;
For GUI clients, since this project has several, there is one folder with shared context files, and on top of that, each client has its own context folder. Shared context files:
;;;;;;
;;;;;;
;;;;;;
- sharedMainApplicationContext.xml
;;;;;;
- sharedGuiContext.xml
;;;;;;
- sharedSecurityContext.xml
;;;;;;
;;;;;;
;;;;;;
App-specific files:
;;;;;;
;;;;;;
;;;;;;
- mainApplicationContext.xml and
;;;;;;
- guiContext.xml and
;;;;;;
- commandsContext.xml (menu structure)
;;;;;;
- sharedBusinessLayerContext.xml (beans for connecting to server)
;;;;;;
;;;;;;
"
431107"",""No"",""If you're still reasonably early in the project I'd advice you strongly to look at annotation-driven configuration. After converting to annotations we only have 1 xml file with definitions and it's really quite small, and this is a large project. Annotation driven configuration puts focus on your implementation instead of the xml. It also more or less removes the fairly redundant abstraction layer which is the spring """"bean name"""". It turns out the bean name exists mostly because of xml (The bean name still exists in annotation config but is irrelevant in most cases). After doing this switch on a large project everyone's 100% in agreement that it's a lot better and we also have fairly decent evidence that it's a more productive environment.
";;;;;;
;;;;;;
I'd really recommend anyone who's using spring to switch to annotations. It's possible to mix them as well. If you need transitional advice I suppose it's easy to ask on SO ;)
;;;;;
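For anyone weighing the switch, here is a minimal sketch of the annotation-driven style (class and package names are illustrative); the remaining XML typically shrinks to a component-scan declaration or a single @Configuration class:
;;;;;;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Configuration
@ComponentScan(basePackages = "com.example.app") // illustrative package
class AppConfig { }

@Repository
class OrderRepository {
    // ... data access code ...
}

@Service
class OrderService {

    private final OrderRepository repository;

    @Autowired
    OrderService(OrderRepository repository) {
        this.repository = repository;
    }
}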
"
431107"",""No"",""""";;;;;;
"431107,""No"",""Spring context files organization and best practices""";;;;;;
"1364273,""No"",""We have many Spring web applications to make on a WebLogic server and are curious about when WARs should go in an EAR and when they should just exist as WARs. Occasionally, the WARs will need to access common logic JARs, but I don't see why these would need to go into an EAR when they could just be packaged into the WARs.
";;;;;;
;;;;;;
From what I understand, if several WARs are in an EAR and you need to modify one of those WARs, you need to redeploy the entire EAR to update the server. This will cause all of the WARs to bounce. If they weren't in an EAR, however, I could just update the one WAR and it would be the only one to bounce.
;;;;;;
;;;;;;
What's wrong with having 100 different WAR files standing alone and using packaged JARs and shared libraries (using WebLogic)?
;;;;;;
;;;;;;
Thank you for any insight!
;;;;;;
"
1364273"",""No"",""If all you have is WAR files, then an EAR is of limited usefulness, serving only as a deployment container for your WARs. You can save a bit of bloat by sharing JARs between the WARs in this way, but that in itself is not hugely compelling.
";;;;;;
;;;;;;
EARs are essential, however, when dealing with full JavaEE/J2EE applications, which use WARs, EJBs, JMS, JCA resources, etc. The interactions and dependencies between the components of these sort of applications is vastly easier to manage in an EAR.
;;;;;;
;;;;;;
But if all you're using Weblogic for is a WAR container, then you might as well use a vanilla servlet container like Tomcat or Jetty, for all the functional use you get out of Weblogic.
;;;;;;
"
1364273"",""No"",""Nothing is actually wrong with just deploying WARs; developers have an interest in getting tasks done as quickly as possible. That means they often will take on technical debt, and if they are in a respectable team, they will clean up that debt.
";;;;;;
;;;;;;
This, however, presents a problem: what happens when you avoid the complexity of EARs and share a JAR by adding it to the application server? Much more common in the WAR-only team is offloading all sorts of application complexity to the application server, simply because it was easier to implement within an often over-allocated schedule. I don't blame them for this at all; however, now we have a new problem. A standard application server cannot be used; you must make system-side customizations. Effectively, the web application is bleeding all over the system. The person who maintains the application server now MUST also know application-specific details... in an enterprise environment, this presents a very clear problem.
;;;;;;
;;;;;;
The developers can then take on system responsibility, but they still need to meet deadlines. They inevitably bleed all over the OS as well, and suddenly the developers are the only possible admins. If an admin doesn’t know what the application is using system side, they can very much cause major problems. These unclear lines always end in fingers pointing in both directions, unknown system states, and team isolation.
;;;;;;
;;;;;;
Do they have to use an EAR then? Nope. I'm a systems engineer, so I always say they can deploy their own application server like any other commercial application, inside an RPM. If deploying a WAR works the way it does on other supported application servers, then they get the WAR deployment pipeline; if not, then it all goes into one RPM... Once the team is no longer allowed to externalize its costs, EARs become a GREAT idea.
;;;;;;
"
1364273"",""No"",""Having multiple shared libraries should not by itself be the compelling reason to go for an EAR, as a JAR (or set of JARs) can always be deployed as a "library" on WebLogic, which can therefore be shared by all the WARs. Isn't that right?
";;;;;;
"
1364273"",""No"",""The argument to package multiple WARs into an EAR can be compelling if you run into the situation that my last employer did, where you have a common set of library JARs that are used by multiple WARs, and the size of that collection of JARs is considerable. In our particular situation, the total size of 3 WARs with the common JARs packaged into each WAR totaled 124MB. By locating the JARs in the containing EAR and configuring the classpath of each WAR to use those JARs, the footprint of the EAR that contained the 3 WARs was reduced to 40MB. I'd consider that a compelling reason.
";;;;;;
"
1364273"",""No"",""I agree with almost all of skaffman's (typically) spot on comments.
";;;;;;
;;;;;;
If you're using Spring without EJBs you can stick with a WAR file, of course. No need for an EAR that I can see.
;;;;;;
;;;;;;
However, if your Spring app uses message-driven POJOs I can see where you'd still deploy a WAR file on WebLogic to take advantage of JMS.
;;;;;;
;;;;;;
An EAR might be necessary if you've got EJBs or JCA, but I wouldn't say that JMS mandates an EAR. I've used JMS and deployed a WAR file on WebLogic and it's worked just fine.
;;;;;;
;;;;;;
If you decide to go with Tomcat and deploy a WAR there, you can still keep JMS functionality if you use ActiveMQ.
;;;;;;
"
1364273"",""No"",""""";;;;;;
"1364273,""No"",""When is it appropriate to use an EAR and when should your apps be in WARs?""";;;;;;
"1472805,""No"",""I am looking into using Doctrine2 with my Zend Framework setup. I really like the datamapper pattern, mainly because it seperates my domain models with my database.
";;;;;;
;;;;;;
My question is what is the best practice for using Doctrine and DQL with my controllers?
;;;;;;
;;;;;;
;;;;;;
1. Should my controllers use the Doctrine DQL/EntityManager directly for saving/loading my domain models?

2. Or should I create my own classes in the datamapper pattern for saving/loading my domain models, and then use Doctrine internally in my own classes?
;;;;;;
;;;;;;
;;;;;;
The pro for #1 is of course that I don't need to create my own datamapper models, but then again, with #2 I can later replace Doctrine (in theory).
;;;;;;
;;;;;;
What would you do?
;;;;;;
"
1472805"",""No"",""In my experience, writing persistence-layer logic in the controller has always come back to haunt me in the form of technical debt. This seemingly small decision now will likely cause unavoidable, and potentially large-scale, refactoring in the future, even on small projects. Ideally we would love to be able to copy and paste extremely thin controllers, reconfigured with the appropriate models, to avoid having to create CRUD controllers over and over, while still allowing for customization of those controllers. In my opinion this can only be accomplished by being disciplined and keeping the persistence layer abstracted away from the application as much as possible. Since the overhead of doing that, especially on a clean-room controller, is minimal, I can't see any good reason to advise you not to.
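To make that concrete, here is a minimal sketch of what keeping persistence behind an abstraction can look like; the UserService interface, the Doctrine-backed implementation and the controller are illustrative names, not code from the question:

// Hypothetical sketch: the controller only knows a small service interface,
// so Doctrine (or any other persistence mechanism) can be swapped later.
interface UserService
{
    public function findByName($name);
    public function register($name, $email);
}

class DoctrineUserService implements UserService
{
    private $em;

    public function __construct(\Doctrine\ORM\EntityManager $em)
    {
        $this->em = $em;
    }

    public function findByName($name)
    {
        // 'User' is an assumed mapped entity class
        return $this->em->getRepository('User')->findOneBy(array('name' => $name));
    }

    public function register($name, $email)
    {
        $user = new User();          // assumed entity with setters
        $user->setName($name);
        $user->setEmail($email);
        $this->em->persist($user);
        $this->em->flush();
        return $user;
    }
}

class UserController
{
    private $users;

    public function __construct(UserService $users)
    {
        $this->users = $users;
    }

    public function showAction($name)
    {
        $user = $this->users->findByName($name);
        return $user; // hand off to the view layer
    }
}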
";;;;;;
"
1472805"",""No"",""My thoughts are similar to pix0r's response. However, if your project is small enough that you could use the EntityManager/DQL directly in controllers then Doctrine 2 is possibly overkill for your project and maybe you should consider a smaller/simpler system.
";;;;;;
"
1472805"",""No"",""Also consider Symfony in your research. It will let you use ORM(propel/doctrine) or you may write your own data model. You can also bypass the data relationship and use direct SQL if needed.
";;;;;;
;;;;;;
To address your concern on 1 or 2, I personally go with ORM in my projects and bypass it if need arises. And better yet with symfony, you may switch between doctrine and propel or your own classes if you ever need to.
;;;;;;
;;;;;;
Pros:
1) (ORM) Faster development, easier to maintain, generally more secure and consistent, and easy to switch databases should you ever need to (from MySQL to Oracle, etc.).
2) (own classes / direct SQL) Faster runtime, fewer dependencies.

Cons:
1) Slower runtime and larger memory footprint; dependency on other projects.
2) (the reverse of the pros for #1)
;;;;;;
"
1472805"",""No"",""I second the advice from pix0r. The additional abstraction is only worth it if it is a larger project with a potentially long lifetime and maybe with many developers on it. This is pretty much the guideline I follow, too, be it in php with doctrine2 or in java with jpa (since doctrine2 heavily resembles jpa).
";;;;;;
;;;;;;
"If you want the additional abstraction, doctrine2 already ships with the possibility to use repositories (repositories are very similar or even equal to DAOs, maybe with a stronger focus on the business terms and logic). There is a base class Doctrine\ORM\EntityRepository. Whenever you call EntityManager#getRepository($entityName) Doctrine will look whether you configured a custom repository class for that entity. If not, it instantiates a Doctrine\ORM\EntityRepository. You can configure a custom repository class for an entity in the metadata, for example in docblock annotations: @Entity(..., repositoryClass=""""My\Project\Domain\UserRepository""""). Such a custom class should inherit from EntityRepository and call the parent constructor appropriately. The base class already contains some basic find* functionality.
";;;;;;
;;;;;;
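A minimal sketch of the custom-repository approach described above, assuming a hypothetical User entity in a My\Project\Domain namespace (Doctrine 2 docblock mapping):

namespace My\Project\Domain;

use Doctrine\ORM\EntityRepository;

/**
 * @Entity(repositoryClass="My\Project\Domain\UserRepository")
 */
class User
{
    /** @Id @Column(type="integer") @GeneratedValue */
    private $id;

    /** @Column(type="string") */
    private $name;
}

// The custom repository extends the base class, so the generic find*()
// methods remain available alongside the domain-specific ones.
class UserRepository extends EntityRepository
{
    public function findByNamePrefix($prefix)
    {
        return $this->getEntityManager()
            ->createQuery('SELECT u FROM My\Project\Domain\User u WHERE u.name LIKE :p')
            ->setParameter('p', $prefix . '%')
            ->getResult();
    }
}

// Usage ($entityManager is assumed to be configured elsewhere):
// $users = $entityManager->getRepository('My\Project\Domain\User')
//                        ->findByNamePrefix('jo');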
Roman
;;;;;;
"
1472805"",""No"",""Regarding your abstraction question, I'd say it really depends on the lifetime of this project and how portable your code needs to be. If it's a one-off website that will need minimal maintenance, it would probably save you some time to forego the additional abstraction layer and just write Doctrine code in your controllers. However, if you're planning to reuse this code, move it to different platforms, or maintain it for a long period of time, I'd take the time to add that abstraction because it will give you a lot more flexibility.
";;;;;;
;;;;;;
"If you're still researching frameworks, take a look at Kohana. It's basically a lightweight rewrite of CodeIgniter written for PHP5.
";;;;;;
"
1472805"",""No"",""""";;;;;;
"1472805,""No"",""What is the best MVC, Doctrine2, Datamapper practice?""";;;;;;
"1828057,""No"",""While trying to apply agile principles to our development process, in particular scrum principles and XP-like user stories, we faced a problem about the architecture.
";;;;;;
;;;;;;
Maybe we are still too much linked to the architecture-centric development, however we are trying to maintain a strong component based development, mixed with the agile modeling principles. Our aim is to have a small design up front, prone to evolutions during the development.
;;;;;;
;;;;;;
What I'm looking for is something that could let me place into my backlog stories about my architecture and the components inside of it: development stories, not only usage stories.;;;;;;
System story could be a different kind of user story, which tells something that is not strictly related to business value, but that is instead linked to architecture and quality concerns of a system.
;;;;;;
;;;;;;
Edit:;;;;;;
"I found this research of the Aalborg University about """"developer stories"""".
";;;;;;
;;;;;;
Have you any experience, idea or opposition?
;;;;;;
;;;;;;
Thank you in advance! (this is my first question! :D)
;;;;;;
"
1828057"",""No"",""One lens that I find useful to take on developer stories is to think about who """"the user"""" for any given story is. Just because you're not writing a feature that will be seen by people outside your company doesn't mean that there isn't a user for that piece of work. You may be writing code for a team down the hall. In some cases, the user is yourself. This is often the case for developer stories. Think """"As a developer, I have a scalable architecture so that I can easily add new functionality."""" By calling out the particular user, it gives the product owner some insight into who will see the value of the story. And pointing out the """"why"""" is also helpful to convey what benefit the story hopes to achieve. As others have mentioned, the management of the backlog does come down to a negotiation between the product owner and the team. And as always, you need to work out what works best for your team, regardless of process dogma. Every team has a different situation, and ideas that work well for one team don't always translate to another.
";;;;;;
"
1828057"",""No"",""In our team we call them "IT cards", which are cards of the form: "As a developer, I want to refactor the xyz component, to reduce maintenance cost and increase flexibility."
";;;;;;
;;;;;;
"Team members are free to pick any IT-card they deem important instead of popping a """"Feature-card"""" from the prioritized backlog.
";;;;;;
;;;;;;
I find this approach to work reasonably well to keep technical debt at an acceptable level and allow a healthy pace of innovation.
;;;;;;
;;;;;;
I've found it somewhat lacking as a means of re-architecting the system, though. It's hard to justify long departures from the feature-producing flow.
;;;;;;
;;;;;;
As I'm writing this, I'm thinking that one could approach architecture by theming the stories: identify the architectural goals with epics that you break down into themes to focus on.
;;;;;;
"
1828057"",""No"",""It is as simple as putting a "Make sure the Membership component can be tested unplugged from all the other components" 'user' story in the backlog. Your backlog SHOULD have system/development stories, as long as it is synced with the product owner's desire for such an implementation.
";;;;;;
;;;;;;
"This is how we usually put the non-functional requirements in a backlog, like """"The domain model has to run on a different datacenter under load balancing"""" etc.
";;;;;;
"
1828057"",""No"",""My answer here applies.
";;;;;;
;;;;;;
There is a very challenging balance between doing architecture work and more feature-specific work. Technically both are valid approaches and work, but the longer you delay some amount of usable product (sprint results), the larger the risk you take that you aren't building the right product (user requirements, performance requirements, etc.). As early as you can, get to a point where you can perform system-level tests to prove your product works and you can demonstrate the value and direction of the product to your stakeholders.
;;;;;;
"
1828057"",""No"",""IMO a backlog should not include developer stories. There is no way that any Product Owner can sensibly prioritise these alongside business functionality. And what happens if the Product Owner deems one of them unimportant and so pulls it out the backlog? If the team then insists on keeping the story, you are in a situation where ownership of the backlog becomes unclear.
";;;;;;
;;;;;;
However, I do definitely think that the team need to build architecture early on in the project. One problem on my project was that we focussed too heavily on functionality in the first few sprints.
;;;;;;
;;;;;;
"Let's think about """"architectural debt"""" (similar to technical debt) as time that needs to be spent building infrastructure and architecture. Unlike technical debt (which starts at zero and builds up as the team produces functionality without proper refactoring), a team starts with architectural debt and must work to reduce it. Time spent reducing architectural debt means that less time is spent producing functionality, i.e. a lower team velocity and reduced sprint output. In this way architectural debt is similar to technical debt. If requirements emerged that didn't fit the current architecture, then the level of architectural debt would increase.
";;;;;;
;;;;;;
Bear in mind, that the team should decide (and be able to justify to the Product Owner) how they are going to spend their time. And so they can split their effort between functionality, technical debt and architectural debt as they see fit.
;;;;;;
;;;;;;
"Architecture work should still be driven by functionality though. In other words, the team should build infrastructure to support and enable a particular user story. Not just because they think it will be useful in the future. The YAGNI principle applies to that sort of approach.
";;;;;
"
1828057"",""No"",""""";;;;;;
"1828057,""No"",""System stories for agile architecture""";;;;;;
"2489722,""No"",""I have a freelance web application project where the client requests new features every two weeks or so. I am unable to anticipate the requirements of upcoming features. So when the client requests a new feature, one of several things may happen:
";;;;;;
;;;;;;
;;;;;;
1. I implement the feature with ease because it is compatible with the existing platform.

2. I implement the feature with difficulty because I have to rewrite a significant portion of the platform's foundation.

3. The client withdraws the request because it costs too much to implement against the existing platform.
;;;;;;
;;;;;;
;;;;;;
At the beginning of the project, for about six months, all feature requests fell under category 1 because the system was small and agile. But for the past six months, most feature implementations have fallen under category 2. The system is mature, forcing me to refactor and test every time I want to add new modules. Additionally, I find myself breaking things that used to work and fixing them (I don't get paid for this).
;;;;;;
;;;;;;
"The client is starting to express frustration at the time and cost for me to implement new features. To them, many of the feature requests are of the same scale as the features they requested six months ago. For example, a client would ask, """"If it took you 1 week to build a ticketing system last year, why does it take you 1 month to build an event registration system today? An event registration system is much simpler than a ticketing system. It should only take you 1 week!"""" Because of this scenario, I fear feature requests will soon land in category 3). In fact, I'm already eating a lot of the cost myself because I volunteer many hours to support the project.
";;;;;;
;;;;;;
The client is often shocked when I tell him honestly the time it takes to do something. The client always compares my estimates against the early months of a project. I don't think they're prepared for what it really costs to develop, maintain and support a mature web application.
;;;;;;
;;;;;;
When working on a salary for a full time company, managers were more receptive of my estimates and even encouraged me to pad my numbers to prepare for the unexpected. Is there a way to condition my clients to think the same way?
;;;;;;
;;;;;;
Can anyone offer advice on how I can continue to work on this web project without eating too much of the cost myself?
;;;;;;
;;;;;;
Additional info - I've only been freelancing full time for 1 year. I don't yet have the high end clients, but I'm slowly getting there. I'm getting better quality clients as time goes by.
;;;;;;
"
2489722"",""No"",""";;;;;;
Can anyone offer advice on how I can continue to work on this web project without eating too much of the cost myself?
;;;;;;
;;;;;;
;;;;;;
Transparency and communication are your best tools. If your clients can't understand why something that once took a week now takes three weeks, you need to be able to explain better. Depending on the client's area of expertise, you may be able to find a metaphor that resonates with them - trying to build a Prius on a Model T frame, say, or trying to write War and Peace with a typewriter with no vowels. Don't be ashamed of your honest estimates, and don't be bullied. And share with your customer as much as they can bear about your process and the obstacles you face - you may even find that they have some worthy suggestions.
;;;;;;
;;;;;;
With respect to the issue of technical debt - and I agree that this is the underlying problem - TDD will take you far, as will the frequent refactoring that broad test coverage permits. Think about what design would have permitted all your changes easily - and work toward that design, incrementally, with tests and refactoring. Maybe you have to eat the costs of that, because the functionality is all already paid for. But, looking forward, include costs for refactoring in your estimates - and don't think of it as padding. Padding is (arguably) dishonest; maintaining the design of your code to accommodate future changes is an honest requirement of your work.
;;;;;
"
2489722"",""No"",""Take a look at these two articles.
";;;;;;
"
2489722"",""No"",""Been doing the freelancing thing myself recently (different field though), and I built two things into the contract: a) if any major (in my opinion) additions or changes were to be made to the framework, each would be counted as a separate project with separate delivery requirements and costings; b) I would provide a suitable level of documentation so that if they weren't happy with my 'estimate', they could try someone else.
;;;;;
;;;;;;
I had one client try option b once; they came back fairly quickly.
;;;;;
"
2489722"",""No"",""It sounds to me like you've got some technical debt in your architecture; it's brittle with respect to change. In addition, it's not clear that you're testing at the right time. The best time to write your tests is before you write your code, letting your tests function as an executable specification for your code.
;;;;;
;;;;;;
A robust architecture should facilitate change by encouraging decoupling between classes. This should limit the propagation of change as new features are added. It sounds as if you have more coupling than is healthy, but that's nearly impossible to tell without looking at the code. I'm just going by your description of the symptoms.
;;;;;;
;;;;;;
If this is the case, it might be worth investing some time in improving the underlying architecture. Be up front with your client that the underlying system no longer fits their requirements and that you need to take some time now so that future changes can be done faster and cheaper. It's possible that some of this is your fault -- if so, be honest about that, too. I don't think that it's unreasonable to expect the client to pick up the tab for changes to the architecture required to support their new features. If it's partially a result of inexperience, though, you may want to eat some of the cost yourself and chalk it up to a learning experience. You may want to do this anyway if you might otherwise lose the customer.
;;;;;;
"
2489722"",""No"",""""";;;;;;
"2489722,""No"",""How to make freelance clients understand the costs of developing and maintaining mature products?""";;;;;;
"2718864,""No"",""If we had a defined hierarchy in an application. For ex a 3 - tier architecture, how do we restrict subsequent developers from violating the norms?
";;;;;;
;;;;;;
For example, in the case of an MVP (not ASP.NET MVC) architecture, the presenter should always bind the model and the view. This helps in writing proper unit tests. However, we had instances where people directly imported the model in the view and called its functions, violating the norms, and hence the test cases couldn't be written properly.
;;;;;;
;;;;;;
Is there a way we can restrict which classes are allowed to inherit from a set of classes? I am looking at various possibilities, including adopting a different design pattern, however a new approach should be worth the code change involved.
;;;;;;
"
2718864"",""No"",""Just as soon as everything gets locked down according to your satisfaction, new requirements will arrive and you'll have to break through the side of it.
";;;;;;
;;;;;;
Enforcing such stringency at the programming level with .NET is almost impossible considering a programmer can access all private members through reflection.
;;;;;;
;;;;;;
Do yourself a favour and schedule regular code reviews, provide education and implement proper training. And, as you said, it will quickly become evident when you can't write unit tests against it.
;;;;;;
"
2718864"",""No"",""You are wanting to solve a people problem with software? Prepare for a world of pain!
";;;;;;
;;;;;;
The way to solve the problem is to make sure that you have ways of working with people that you don't end up with those kinds of problems.... Pair Programming / Review. Induction of people when they first come onto the project, etc.
;;;;;;
;;;;;;
Having said that, you can write tools that analyse the software and look for common problems. But people are pretty creative and can find all sorts of bizarre ways of doing things.
;;;;;;
"
2718864"",""No"",""It's been almost 3 years since I posted this question. I must say that I have tried exploring this despite the brilliant answers here. Some of the lessons I've learnt so far -
";;;;;;
;;;;;;
;;;;;;
More code smells come out by looking at the consumers (unit tests are the best place to look, if you have them).
;;;;;;
;;;;;;
;;;;;;
- Number of parameters in a constructor are a direct indication of number of dependencies. Too many dependencies => Class is doing too much.
;;;;;;
- Number of (public) methods in a class
;;;;;;
- Setup of unit tests will almost always give this away
;;;;;;
;;;;;;
Code deteriorates over time, unless there is a focused effort to clear technical debt, and refactoring. This is true irrespective of the language.
;;;;;;
Tools can help only to an extent. But a combination of tools and tests often give enough hints on various smells. It takes a bit of experience to catch them in a timely fashion, particularly to understand each smell's significance and impact.
;;;;;;
;;;;;;
"
2718864"",""No"",""I'm afraid this is not possible. We tried to achieve this with the help of attributes and we didn't succeed. You may want to refer to my past post on SO.
";;;;;;
;;;;;;
"The best you can do is keep checking your assemblies with NDepend. NDepend shows you dependancy diagram of assemblies in your project and you can immediately track the violations and take actions reactively.
";;;;;;
;;;;;;
"![]()
";;;;;;
"(source: ndepend.com)
";;;;;;
"
2718864"",""No"",""""";;;;;;
"2718864,""No"",""Restrict violation of architecture - asp.net MVP""";;;;;;
"2745373,""No"",""In a multi-tenant ASP.NET MVC application based on Rob Conery's MVC Storefront, should I be filtering the tenant's data in the repository or the service layer?
";;;;;;
;;;;;;
1. Filter tenant's data in the repository:
;;;;;;
;;;;;;
public interface IJobRepository
{
    IQueryable<Job> GetJobs(short tenantId);
}
;;;;;;
;;;;;;
2. Let the service filter the repository data by tenant:
;;;;;;
;;;;;;
public interface IJobService
{
    IList<Job> GetJobs(short tenantId);
}
;;;;;;
;;;;;;
"My gut-feeling says to do it in the service layer (option 2), but it could be argued that each tenant should in essence have their own """"virtual repository,"""" (option 1) where this responsibility lies with the repository.
";;;;;;
;;;;;;
;;;;;;
- Which is the most elegant approach: option 1, option 2 or is there a better way?
;;;;;;
;;;;;;
;;;;;;
;;;;;;
;;;;;;
Update:
;;;;;;
;;;;;;
I tried the proposed idea of filtering at the repository, but the problem is that my application provides the tenant context (via sub-domain) and only interacts with the service layer. Passing the context all the way to the repository layer is a mission.
;;;;;;
;;;;;;
So instead I have opted to filter my data at the service layer. I feel that the repository should represent all data physically available in the repository with appropriate filters for retrieving tenant-specific data, to be used by the service layer.
;;;;;;
;;;;;;
Final Update:
;;;;;;
;;;;;;
I ended up abandoning this approach due to the unnecessary complexities. See my answer below.
;;;;;;
"
2745373"",""No"",""Update: Not going with a multi-tenant approach cost me hundreds of hours in technical debt. Four years down the line, I wish I took the time to implement a clean tenant approach first. Don't make the same mistake!
";;;;;;
;;;;;;
;;;;;;
;;;;;;
Old, out-dated answer:
;;;;;;
;;;;;;
I ended up stripping out all multi-tenant code in favour of using separate applications and databases for each tenant. In my case I have few tenants that do not change often, so I can do this.
;;;;;;
;;;;;;
All my controllers, membership providers, role providers, services and repositories were gravitating toward duplicate .WithTenantID(...) code all over the place, which made me realize that I didn't really need one Users table to access data that is specific to one tenant 99% of the time, so using separate applications just makes more sense and makes everything so much simpler.
;;;;;;
;;;;;;
Thanks for your answers - they made me realize that I needed a redesign.
;;;;;;
"
2745373"",""No"",""@FreshCode, we do it in the repository, and we do not pass the tenant as a parameter. We use the following approach:
";;;;;;
;;;;;;
public IQueryable<Job> GetJobs()
{
    return _db.Jobs.Where(j => j.TenantId == Context.TenantId);
}
;;;;;;
;;;;;;
The context is a dependency of the repository, and it is created in BeginRequest, where you determine the tenant based on the URL, for example. I think this way it's pretty transparent, and you can avoid the tenantId parameter, which may become a bit cumbersome.
;;;;;;
;;;;;;
Regards.
;;;;;;
"
2745373"",""No"",""""";;;;;;
"2745373,""No"",""Multi-tenant Access Control: Repository or Service layer?""";;;;;;
"4398584,""No"",""I want to manage Sessions with client apps of my Restful WCF Service. Client app can be a J2me application or a .NET application.
";;;;;;
;;;;;;
What is the recommended way of maintaining sessions in RESTFUL WCF service?
;;;;;;
;;;;;;
Idea is to recognize that the request is coming from an already authenticated client.
;;;;;;
"
4398584"",""No"",""REST defines that the interaction is stateless, no client state is maintained on the server so you are looking to move away from a RESTful interface.
";;;;;;
;;;;;;
I cannot imagine a situation where you would want to maintain client state on a server that's providing WCF services. I think you need to look at your architecture as you are possibly about to cause yourself a lot of technical debt.
;;;;;;
"
4398584"",""No"",""This question may be useful to you: Best Practices for securing a REST API / web service
";;;;;;
;;;;;;
I think the restful thing to do here is to send the user credentials on each request, if you can do that in a way that is transparent to the user and doesn't compromise the credentials. If you can't do that, cookies for the sole purpose of maintaining client identity have become a common concession among developers of restful services. Just don't go storing anything else with the cookie.
;;;;;;
"
4398584"",""No"",""""";;;;;;
"4398584,""No"",""How to Manage Sessions in Restful WCF Service""";;;;;;
"4745301,""No"",""I'm looking for way to present equally sized elements in a fixed number of rows and any number of columns. (Think of iTunes' or Picasa's album view. I believe some platforms refer to this as a 'gridview')
";;;;;;
;;;;;;
A WrapPanel would do the job, but I'm binding against a very large collection of objects, so I need virtualization.
;;;;;;
;;;;;;
I've been looking around the web, and found both commercially available VirtualizationWrapPanels and blog posts on how to implement your own VirtualizationPanel, but I can't seem to find any simpler solutions.
;;;;;;
;;;;;;
Is it possible to arrange virtualized databound items in a grid-style view (fixed number of rows) with standard WPF components?
;;;;;;
"
4745301"",""No"",""It is the responsibility of the Panel to provide Virtualization. Unfortunately the framework only provides a virtualizing StackPanel:
";;;;;;
;;;;;;
"http://msdn.microsoft.com/en-us/library/system.windows.controls.virtualizingpanel.aspx
";;;;;;
;;;;;;
There is a very good blog post that provides a virtualizing WrapPanel here:
;;;;;;
;;;;;;
"https://blogs.claritycon.com/custom-panels-in-silverlight-wpf-part-4-virtualization-7f3bded02587
";;;;;;
;;;;;;
Another alternative is to use a DataGrid, this will virtualize for you.
;;;;;;
"
4745301"",""No"",""A quick-and-dirty solution is to use a list (in your case a horizontal one) of """"grouping items"""" (in your case vertical ones) which will determine desired number of rows. Virtualization will occur on the """"groupers"""".
";;;;;;
"
4745301"",""No"",""I've recently had to have a hunt round for similar functionality and struggled to find anything that was production ready.
";;;;;;
;;;;;;
I found a series or articles and sample code that contain a Virtualizing Tile Panel
;;;;;;
;;;;;;
"http://blogs.msdn.com/b/dancre/archive/tags/virtualizingtilepanel/
";;;;;;
;;;;;;
I've been using it and it has been fairly stable. There were some changes that needed to be made, though. We had to add some of the keyboard control into the panel as it wasn't implemented; tabbing needed to be changed, as did tile sizing, etc. It's a good starting point if you do decide to roll your own.
;;;;;;
;;;;;;
One major caveat though was that it also MUST have a parent that is constrained to a limited size else it errors out. This is not normally an issue as you will want it to be limited in size so you can enable scrolling. There may be a solution to this particular problem but we didn't have time to investigate. We just raised it as technical debt as it doesn't actually affect us in its current form.
;;;;;;
"
4745301"",""No"",""""";;;;;;
"4745301,""No"",""WPF arrange items in a grid with virtualization""";;;;;;
"5834540,""No"",""I was doing a project that requires frequent database access, insertions and deletions. Should I go for Raw SQL commands or should I prefer to go with an ORM technique? The project can work fine without any objects and using only SQL commands? Does this affect scalability in general?
";;;;;;
;;;;;;
EDIT: The project is one of the types where the user isn't provided with my content, but the user generates content, and the project is online. So, the amount of content depends upon the number of users, and if the project has even 50000 users, and additionally every user can create content or read content, then what would be the most apt approach?
;;;;;;
"
5834540"",""No"",""It depends a bit on timescale and your current knowledge of MySQL and ORM systems. If you don't have much time, just do whatever you know best, rather than wasting time learning a whole new set of code.
";;;;;;
;;;;;;
With more time, an ORM system like Doctrine or Propel can massively improve your development speed. When the schema is still changing a lot, you don't want to be spending a lot of time just rewriting queries. With an ORM system, it can be as simple as changing the schema file and clearing the cache.
;;;;;;
;;;;;;
Then when the design settles down, keep an eye on performance. If you do use ORM and your code is solid OOP, it's not too big an issue to migrate to SQL one query at a time.
;;;;;;
;;;;;;
That's the great thing about coding with OOP - a decision like this doesn't have to bind you forever.
;;;;;;
"
5834540"",""No"",""If the project is oriented toward:
- performance (as in designing the fastest algorithm to do a simple task)
;;;;;;
;;;;;;
Then you could go with direct sql commands in your code.
;;;;;;
;;;;;;
The thing you don't want to do is this when it is a large piece of software, where you end up with many classes and lots of code. If you are in this case and you scatter SQL everywhere in your code, you will clearly regret it someday. You will have a hard time making changes to your domain model. Any modification would become really hard (except for adding functionality or entities independent of the existing ones).
;;;;;;
;;;;;;
More information would be good, though, as : ;;;;;;
- What do you mean by frequent (how frequent) ?;;;;;;
- What performance do you need ?
;;;;;;
;;;;;;
EDIT
;;;;;;
;;;;;;
It seems you're making some sort of CMS service. My bet is you don't want to start stuffing your code with SQL. @teresko's pattern suggestion seems interesting, separating your application logic from the DB (which is always good) while giving you the possibility to customize every query. Nonetheless, adding a layer that fills in-memory objects can take more time than simply using the database result to write your page, but I don't think that small difference should matter in your case.
;;;;;;
;;;;;;
I'd suggest choosing a good pattern that separates your business logic and data access, like what @tereško suggested.
;;;;;;
"
5834540"",""No"",""For speed of development, I would go with an ORM, in particular if most data access is CRUD.
";;;;;;
;;;;;;
This way you don't have to also develop the SQL and write data access routines.
;;;;;;
;;;;;;
Scalability shouldn't suffer, though you do need to understand what you are doing (you could hurt scalability with raw SQL as well).
;;;;;;
"
5834540"",""No"",""I'd say it's better to try to achieve the objective in the most simple way possible.";;;;;;
If using an ORM has no real added advantage, and the application is fairly simple, I would not use an ORM.;;;;;;
If the application is really about processing large sets of data, and there is no business logic, I would not use an ORM.
;;;;;;
;;;;;;
That doesn't mean that you shouldn't design your application properly, though. But again: if using an ORM doesn't give you any benefit, then why should you use it?
;;;;;;
"
5834540"",""No"",""If you have no (or limited) experience with ORMs, then it will take time to learn a new API. Plus, you have to keep in mind that you sacrifice speed for 'magic'. For example, most ORMs will select the wildcard '*' for fields, even when you just need a list of titles from your Articles table.
";;;;;;
;;;;;;
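A small sketch of the point about wildcard selects; the Articles table, columns and connection details are assumptions, not part of the answer:

// A typical ORM call loads every column of every matched row, e.g. something
// like $orm->findAll('Articles');   // SELECT * FROM Articles
// When all you need is the titles, a hand-written query fetches far less data:
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->query('SELECT id, title FROM Articles ORDER BY id DESC');
$titles = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);   // array of id => title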
And ORMs will always fail in niche cases.
;;;;;;
;;;;;;
Most of the ORMs out there (the ones based on the ActiveRecord pattern) are extremely flawed from OOP's point of view. They create a tight coupling between your database structure and your class/model.
;;;;;;
;;;;;;
"You can think of ORMs as technical debt. It will make the start of project easier. But, as the code grows more complex, you will begin to encounter more and more problems caused by limitations in ORM's API. Eventually, you will have situations, when it is impossible to to do something with ORM and you will have to start writing SQL fragments and entires statements directly.
";;;;;;
;;;;;;
"I would suggest to stay away from ORMs and implement a DataMapper pattern in your code. This will give you separation between your Domain Objects and the Database Access Layer.
";;;;;;
"
5834540"",""No"",""I would always recommend using some form of ORM for your data access layer, as there has been a lot of time invested into the security aspect. That alone is a reason to not roll your own, unless you feel confident about your skills in protecting against SQL injection and other vulnerabilities.
";;;;;;
"
5834540"",""No"",""""";;;;;;
"5834540,""No"",""Raw SQL vs OOP based queries (ORM)?""";;;;;;
"7765070,""No"",""I would like to know, can Redbean ORM be used for performance oriented scenarios like social networking web apps, and is it stable even if thousands of data is pulled by multiple users at same time? Also I'd like to know whether Redbean consumes more memory space?
";;;;;;
;;;;;;
Can anyone offer a comparison study of Doctrine-Propel-Redbean?
;;;;;;
"
7765070"",""No"",""";;;;;;
" @tereško if tis possible, can you give the pros and cons of orm with respect to pure sql according to your experience and also i will google the topic at same time. – Jaison Justus
";;;;;;
;;;;;;
;;;;;;
Well .. explaining this in 600 characters would be hard.
;;;;;;
;;;;;;
One thing I must clarify: this is about ORMs in PHP, though i am pretty sure it applies to some Ruby ORMs too and maybe others.
;;;;;;
;;;;;;
"In brief, you should avoid them, but if you have to use an ORM, then you will be better of with Doctrine 2.x , it's the lesser evil. (Implements something similar to DataMapper instead of ActiveRecord).
";;;;;;
;;;;;;
Case against ORMs
;;;;;;
;;;;;;
The main reason why some developers like to use ORMs is also the worst thing about them: it is easy to do simple things in an ORM, with very minor performance costs. This is perfectly fine.
;;;;;;
;;;;;;
1. Exponential complexity
;;;;;;
;;;;;;
"The problem originates in people to same tool for everything. If all you have is a hammer (..) type of issue. This results in creating a technical debt.
";;;;;;
;;;;;;
"At first it is easy to write new DB related code. And maybe, because you have a large project, management in first weeks (because later it would case additional issues - read The Mythical Man-Month, if interested in details) decides to hire more people. And you end up preferring people with ORM skills over general SQL.
";;;;;;
;;;;;;
But, as project progresses, you will begin to use ORM for solving increasingly complex problems. You will start to hack around some limitations and eventually you may end up with problems which just cannot be solved even with all the ORM hacks you know ... and now you do not have the SQL experts, because you did not hire them.
;;;;;;
;;;;;;
"Additionally most of popular ORMs are implementing ActiveRecord, which means that your application's business logic is directly coupled to ORM. And adding new features will take more and more time because of that coupling. And for the same reason, it is extremely hard to write good unit-tests for them.
";;;;;;
;;;;;;
2. Performance
;;;;;;
;;;;;;
I already mentioned that even simple uses of an ORM (working with a single table, no JOIN) have some performance costs. This is due to the fact that they use the wildcard * for selecting data. When you need just the list of article IDs and titles, there is no point in fetching the content.
;;;;;;
;;;;;;
ORMs are really bad at working with multiple tables, when you need data based on multiple conditions. Consider the problem:
;;;;;;
;;;;;;
;;;;;;
Database contains 4 tables: Projects, Presentations, Slides and Bulletpoints.
;;;;;;
;;;;;;
;;;;;;
- Projects have many Presentations
;;;;;;
- Presentations have many Slides
;;;;;;
- Slides have many Bulletpoints
;;;;;;
;;;;;;
;;;;;;
" And you need to find content from all the Bulletpoints in the Slides tagged as """"important"""" from 4 latest Presentations related to the Projects with ids 2, 4 and 8.
";;;;;;
;;;;;;
;;;;;;
This is a simple JOIN to write in pure SQL, but in any ORM implementation that I have seen, this will result in a 3-level nested loop, with queries at every level.
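For illustration, the single query could look roughly like this; the schema and column names are assumptions, and the "4 latest presentations per project" restriction is left out to keep the sketch short:

// Rough sketch of the single JOIN described above (schema/column names are
// assumptions; the per-project "4 latest presentations" limit is omitted).
$sql = "
    SELECT b.content
    FROM bulletpoints AS b
    JOIN slides        AS s ON b.slide_id = s.id
    JOIN presentations AS p ON s.presentation_id = p.id
    WHERE s.tag = 'important'
      AND p.project_id IN (2, 4, 8)
";
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$contents = $pdo->query($sql)->fetchAll(PDO::FETCH_COLUMN);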
;;;;;;
;;;;;;
;;;;;;
;;;;;;
P.S. there are other reasons and side-effects, but they are relatively minor .. cannot remember any other important issues right now.
;;;;;;
"
7765070"",""No"",""I differ from @tereško here - ORMs can make database queries easier to write and easier to maintain. There is some great work going into Propel and Doctrine, in my opinion - take advantage of them! There are a number of performance comparisons on the web, and check out NotORM as well (I've not used it but they do some comparisons to Doctrine, if I recall correctly).
";;;;;;
;;;;;;
If you get to a point where your throughput requires you to do raw SQL then optimise at that point. But in terms of reducing your bug count and increasing your productivity, I think that your savings will fund a better server anyway. Of course, your mileage may vary.
;;;;;;
;;;;;;
I don't know RedBean, incidentally, but I am mildly of the view that Propel is faster than Doctrine in most cases, since the classes are pre-generated. I used Propel when it was the only option and have stuck with it, though I certainly wouldn't be averse to using Doctrine.
;;;;;;
;;;;;;
2018 update
;;;;;;
;;;;;;
Propel 2 is still in alpha after a number of years, and is in need of a number of large refactoring projects, which sadly were not getting done. Although the maintainers say that this alpha is good to use in production, since it has good test coverage, they have now started on Propel 3. Unfortunately, this has not actually had any releases yet, at the time of my writing this, despite the repository being a year old.
;;;;;;
;;;;;;
While I think Propel was a great project, I wonder if it is best to use something else for the time being. It could yet rise from the ashes!
;;;;;;
"
7765070"",""No"",""I feel Tereško's answer is not quite right.
";;;;;;
;;;;;;
Firstly it does not address the original question. It's indeed a case against ORMs, and I agree with the problems described in his answer. That's why I wrote RedBeanPHP. Just because most ORMs fail to make your life a bit easier does not mean the concept of an object relational mapping system is flawed. Most ORMs try to hide SQL, which is why JOINs get so complex; they need to re-invent something similar in an object oriented environment. This is where RedBeanPHP differs, as it does not hide SQL. It creates readable, valid SQL tables that are easy to query. Instead of a fabricated query language RedBeanPHP uses plain old SQL for record and bean retrieval. In short; RedBeanPHP works with SQL rather than against it. This makes it a lot less complex.
;;;;
;;;;;;
And yes, the performance of RedBeanPHP is good. How can I be so sure? Because unlike other ORMs, RedBeanPHP distinguishes between development mode and production mode. During the development cycle the database is fluid; you can add entries and they will be added dynamically. RedBeanPHP creates the columns and indexes, guesses the data types, etc. It even widens columns if you need more bytes (a higher data type) after a while. This makes RedBeanPHP extremely slow, but only during development time, when speed should not be an issue. Once you are done developing, you freeze the database with a single call, R::freeze(), and no more checks are done. What you are left with is a pretty straightforward database layer on your production server. And because not much is done, performance is good.
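A small illustration of that fluid/frozen workflow using RedBeanPHP's R facade; the 'book' bean and its fields are made-up examples:

require 'rb.php';                     // RedBeanPHP single-file distribution
R::setup('sqlite:/tmp/example.db');   // during development: fluid mode

$book = R::dispense('book');          // RedBean creates the table on the fly
$book->title = 'Refactoring';         // ...and the columns, with guessed types
$book->pages = 448;
$id = R::store($book);

$again = R::load('book', $id);        // generic load by id

// On the production server, after the schema has settled:
R::freeze(true);                      // stop schema checks and alterations;
                                      // queries now run against the fixed schema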
;;;;;
;;;;;;
"Yes, I know, I am the author of RedBeanPHP so I am biased. However I felt like my ORM was being viewed in the same light as the other ORMs, which prompted me to write this. If you want to know more, feel free to consult the RedBeanPHP website, and here is a discussion on performance.
";;;;
;;;;;;
At our company we use RedBeanPHP for embedded systems as well as financial business systems, so it seems to scale rather well.
;;;;;;
;;;;;;
Together, the RedBeanPHP community and I are sincerely trying to make the ORM world a better place; you can read the mission statement here.
";;;;;
;;;;;;
Good luck with your project and I hope you find the technical solution you are looking for.
;;;;;;
"
7765070"",""No"",""I would go with """"Horses for Courses"""" situation that utilizes a mix and match of both the worlds. I have built few large scale applications using RedBean, so my comment will focus purely on RedBean and not on other ORMs.
";;;;;;
;;;;;;
IS RedBean ORM SLOW?
;;;;;;
;;;;;;
Well, it depends on how you use it. In certain scenarios it's faster than a traditional query, because RedBean caches the result for a few seconds. Reusing the query will produce results faster. Have a look at the log using R::debug(true); it always shows:
;;;;;
;;;;;;
"""""SELECT * FROM `table` -- keep-cache""""";;;;;;
;;;;;;
;;;;;;
Scenario 1: Fetching All (*)
;;;;;;
;;;;;;
In RedBean if you query
;;;;;;
;;;;;;
$result = R::findOne('table', ' id = ?', array($id));
;;;;;;
;;;;;;
This is represented as
;;;;;;
;;;;;;
"$result= mysql_query(""""Select * from TABLE where id ="""".$id)";;;;;;
;;;;;;
;;;;;;
You may argue: if the table has multiple columns, why should you query (*)?
;;;;;;
;;;;;;
Scenario 2: Single column
;;;;;;
;;;;;;
Fetching a single column
;;;;;;
;;;;;;
R::getCol( 'SELECT first_name FROM accounts' );
;;;;;;
;;;;;;
"Like i mentioned """"Horses for Courses"""", developers should not simply rely on FindOne, FindAll, FindFirst, FindLast but also carefully draft what they really need.
";;;;;;
;;;;;;
Scenario 3: Caching
;;;;;;
;;;;;;
When you don't need caching, you can toggle it at the application level, which isn't an ideal situation:
;;;;;;
;;;;;;
R::$writer->setUseCache(true);
;;;;
;;;;;;
"RedBean suggests that if you don't want to disable caching at the application level you should use traditional query with no-cache parameter like $result = R::exec(""""SELECT SQL_NO_CACHE * FROM TABLE"""")";
;;;;;
;;;;;;
This perfectly solves the problem of fetching real-time data from table by completely discarding query cache.
;;;;;;
;;;;;;
Scenario 4: Rapid Development
;;;;;;
;;;;;;
Using ORM makes your application development really fast, developers can code using ORM 2-3x faster than writing SQL.
;;;;;;
;;;;;;
Scenario 5: Complex Queries & Relationships
;;;;;
;;;;;;
RedBean presents a really nice way of implementing complex queries and one-to-many or many-to-many relationships
;;;;;;
;;;;;;
Plain SQL for complex queries
;;;;;;
;;;;;;
$books = R::getAll( 'SELECT
    book.title AS title,
    author.name AS author,
    GROUP_CONCAT(category.name) AS categories FROM book
    JOIN author ON author.id = book.author_id
    LEFT JOIN book_category ON book_category.book_id = book.id
    LEFT JOIN category ON book_category.category_id = category.id
    GROUP BY book.id
' );
foreach( $books as $book ) {
    echo $book['title'];
    echo $book['author'];
    echo $book['categories'];
}
;;;;;;
;;;;;;
Or the RedBean way of handling many-to-many relationships:
;;;;;;
;;;;;;
list($vase, $lamp) = R::dispense('product', 2);

$tag = R::dispense( 'tag' );
$tag->name = 'Art Deco';

// creates the product_tag table!
$vase->sharedTagList[] = $tag;
$lamp->sharedTagList[] = $tag;
R::storeAll( [$vase, $lamp] );
;;;;;;
;;;;;;
Performance Issues
;;;;;;
;;;;;;
Arguments like "ORMs are typically slow, consume more memory and tend to make an application slow" are not, I think, talking about RedBean.
;;;;;;
;;;;;;
We have tested it with both MySQL and Postgres; trust me, performance was never a bottleneck.
;;;;;;
;;;;;;
There is no denying that ORMs add a little overhead and tend to make your application slower (just a little). Using an ORM is primarily a trade-off between developer time and slightly slower runtime performance. My strategy is to first build the application end-to-end with the ORM, then, based on test cases, tweak the speed-critical modules to use straight data access.
;;;;;;
"
7765070"",""No"",""""";;;;;;
"7765070,""No"",""RedBean ORM performance""";;;;;;
"7803282,""No"",""There is a lot of good content on SO about MVC and getting started with MVC, but I'm having trouble finding anything about how best to implement MVC structure on a pre-existing, live website.
";;;;;;
;;;;;;
My site is a nasty mishmash of echos and concatenated HTML that would make any professional programmer throw-up, but it works.
;;;;;;
;;;;;;
I'd like to spend some time tackling the mounting technical debt, however, and that means moving to a much more sane MVC structure.
;;;;;;
;;;;;;
If at all possible, I'd like to avoid a "let 'er rip!" 100% rewrite-and-launch approach, and instead take it a section at a time. But it seems the centralized front-controller structure of a basic MVC setup is not suitable for such an approach?
;;;;;;
"
7803282"",""No"",""It is possible to do. You can write your mod_rewrite rules to only redirect to boot.php (or whatever your front controller is) if no actual file is found at the requested path. This would allow you to do a section at a time. Making sure all of your links are in order will be a nightmare, however.
";;;;;;
;;;;;;
You may want to do a rewrite, and copy and paste the pieces you need out of the old application as you go.
;;;;;;
"
7803282"",""No"",""I agree with the other suggestions here; a framework isn't going to be a magic fix.
";;;;;;
;;;;;;
However, it can help in the long run. I have converted a number of mishmash sites to the Kohana framework now, and have had the following experiences.
;;;;;;
;;;;;;
Initially I didn't know Kohana well enough, so I was learning it while recoding my site. I ended up stopping the rewrite and coding a completely new project from scratch to learn Kohana, then went back to the rewrite project once I understood the framework better.
;;;;;;
;;;;;;
If you don't understand the framework, it is going to be a steep learning curve trying to use it to convert an old project.
;;;;;;
;;;;;;
;;;;;;
The first step in the rewrite was to pull all the business/database logic embedded in the pages up to the top of each page (prior to the HTML output), so that I wasn't changing the flow/structure of the website, just separating business logic from display logic.
;;;;;;
;;;;;;
After that I had a site with easily readable business logic, just in the old structure, and I had familiarised myself with the old codebase at the same time.
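A tiny before/after sketch of that first step; the page, table and variable names are made up for illustration, and a mysql connection is assumed to be open already:

// Hypothetical "orders.php" page after step one: all business/database logic
// is gathered at the top of the file, before any HTML output...
$orders = array();
$result = mysql_query("SELECT id, title FROM orders WHERE status = 'open'");
while ($row = mysql_fetch_assoc($result)) {
    $orders[] = $row;
}
// ...while the bottom half of the same file only displays the prepared data:
?>
<h1>Open orders</h1>
<ul>
<?php foreach ($orders as $order): ?>
    <li><?php echo htmlspecialchars($order['title']); ?></li>
<?php endforeach; ?>
</ul>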
;;;;;;
The next step was to fix any database structure issues so that everything was in third normal form (if possible).
;;;;;;
;;;;;;
I found it easier to modify the old code to the new database structure than to work around an old database structure in the new framework. (Kohana is largely a convention-based framework rather than a configuration-based one, so it was nice to follow those conventions to ease long-term maintenance.)
;;;;;;
;;;;;;
Having a good database structure makes life easier regardless of the framework.
;;;;;;
The next step was to pick a part of the website to replace: set up the routes in Kohana and let Kohana serve that part of the project. Kohana (and other frameworks, no doubt) has a fallback: if a file being requested via a URL already exists on the site, then Kohana won't handle that request.
;;;;;;
;;;;;;
Since you have separated the business logic from the display logic in your PHP files, it is a simple matter of splitting the code into a controller and a view. Make the changes to both parts to suit the framework. You can split the business logic into model/controller after you have the controller/view working as expected.
;;;;;;
;;;;;;
;;;;;;
Work your way through that part of the site until it is complete, then test/launch/bugfix etc.

Then start again on the next part of the site.

Eventually you will get there...
;;;;;;
;;;;;;
Although it took a lot of time to rewrite, for me it was worthwhile, as the sites are far easier to maintain now. (Obviously the amount of gain will depend on the quality of the original codebase.)
;;;;;;
;;;;;;
good luck
;;;;;;
"
7803282"",""No"",""If I correctly understand the overall level of quality of that codebase, then there is no way to move to MVC in one step. It is just impossible. More bad news is that frameworks will not help; they cannot magically transform a bad codebase into something resembling an MVC-ish architecture.
";;;;;;
;;;;;;
"Instead you should focus on incremental refactoring. Your goal should be code that is mostly following SOLID principles and LoD. And while you refactor your code , the architecture will emerge by itself. MVC has many variants and flavors.
";;;;;;
;;;;;;
"One definite thing you might want to look at are the ways of using templates in php. Examine the code, and see what you has to change to fit your needs (it is more of a direction, not a complete solution). And remember that in MVC-like structures View is not a template, but it View uses multiple templates.
";;;;;;
;;;;;;
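As an illustration of that last point, a View along these lines might look roughly like the sketch below; the class and template names are made up and not tied to any particular framework:

// Sketch: a View object that combines several templates instead of being one.
class ArticleView
{
    private $templateDir;

    public function __construct($templateDir)
    {
        $this->templateDir = $templateDir;
    }

    private function renderTemplate($name, array $vars)
    {
        extract($vars);                 // expose variables to the template
        ob_start();
        include $this->templateDir . '/' . $name . '.php';
        return ob_get_clean();
    }

    public function render(array $article, array $comments)
    {
        // The view decides which templates to combine and in what order.
        $body  = $this->renderTemplate('article', array('article' => $article));
        $body .= $this->renderTemplate('comments', array('comments' => $comments));
        return $this->renderTemplate('layout', array('content' => $body));
    }
}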
"Another thing you might benefit from is learning more about datamappers. Implementing them would be a good step in direction of creating model layer.
";;;;;;
;;;;;;
Oh .. and then there are few general lectures you could take a look at ( all are 30min+ ):
;;;;;;
;;;;;;
;;;;;;
;;;;;;
"Oh , and this book has some insights into refactoring large php projects. Could be useful for you.
";;;;;;
"
7803282"",""No"",""""";;;;;;
"7803282,""No"",""Strategies for migrating live site to MVC structure?""";;;;;;
"10283592,""No"",""I'm writing my own MVC framework in PHP, just for learning purposes. It wasn't really hard to have a router/dispatcher class to call the right controller/action etc.
";;;;;;
;;;;;;
But now i'm at the part where i'm going to use models. Or actually, the model layer. But there's something that confuses me.
;;;;;;
;;;;;;
"Alot of other MVC frameworks have a 'BaseModel'. I've read that this is actually bad practise, because the """"Model"""" shouldn't be seen as another class. But as a real 'layer', which can contain things like the 'mapper' pattern or 'repository' pattern etc.
";;;;;;
;;;;;;
But to be honest, i don't see any advantages in that. To me, a 'BaseModel' class seems to be the fastest way to go, with the same results.
;;;;;;
;;;;;;
I can simply do something like:
;;;;;;
;;;;;;
class User extends BaseModel
{
    // the GetUserBy* could easily be something that's handled by the
    // BaseModel class, like in the Repo pattern.

    public function getUserByName ( $name )
    {
        // no error handling of any kind, just for simplicity
        // (note: concatenating $name like this is open to SQL injection)
        return $this->db->exec("SELECT * FROM users WHERE name='" . $name . "'");
    }

    // $data = array
    public function saveUser ( $data )
    {
        // Make sure no extra fields are added to the array
        $user = array ( 'name'    => $data['name'],
                        'address' => $data['address'] );

        $this->db->autoSave ( $user );
    }
}
;;;;;;
;;;;;;
But if I'd go for a repository pattern, then I have to create the following:
- Repositories
- Entities
- DAOs
;;;;;;
;;;;;;
Entities have aggregates pointing to other repositories. So basically I'm manually writing out my entire database schema as objects...
;;;;;;
;;;;;;
In the end, what's the difference? Except that I probably could have saved a lot of time by simply using a BaseModel class...
;;;;;;
;;;;;;
So why is it still considered a bad thing? It's not as if the repo pattern decouples my application more than what I'm doing now. To me, the patterns mentioned above seem highly overrated. They would probably only work in an application that has shared state: save objects locally (in the repository) and commit them later on.
;;;;;
;;;;;;
That's why i think no one can really answer this...
;;;;;;
;;;;;;
"But i'm still hoping to see a decent answer that makes me go: """"ahhhh... What was i thinking...."""". But if not, then i'm sure about my case that the BaseModel isn't a bad thing at all and that most bloggers are just a bunch of sheeps :-)
";;;;;;
"
10283592"",""No"",""I wouldn't use a base model at all as most of your models won't have a uniform interface that they should conform to. You could create a base class for a number of different types of models (which is just basic object oriented programming) if you wanted to.
";;;;;;
;;;;;;
"Overall, I believe the models are supposed to be the """"loose"""" components that you wire together with a controller and display with one or more views. They can't really have a uniform interface because they all would do different things: some may not even talk to a persistence store, some may not be persistent, and/or some may be composed of other models.
";;;;;;
"
10283592"",""No"",""The Model class should be abstract, only defining methods without implementing them. As for your User model, you can easily write an interface GettingUsers which would have all those functions, and implement that on the User model.
";;;;;;
"
10283592"",""No"",""";;;;;;
It's not that the repo pattern decouples my application more than I'm
doing now
;;;;;;
;;;;;;
;;;;;;
"Your application is tightly coupled to SQL database components (which are, in effect, acting as your mapper). However, despite this, your design is much more like a Repository than an Active Record approach (which is likely what most of these bloggers you refer to are griping about).
";;;;;;
;;;;;;
Active records encapsulate not only the data, but also the database access:
;;;;;;
;;;;;;
$user = new User();
$user->setUsername('jane');
$user->setEmail('jane@foo.bar');
$user->save();
;;;;;;
;;;;;;
"It's nicer to have the record objects be unaware of the persistence layer (separation of concerns). Your """"base"""" does just that by returning arrays of user data and when those arrays are modified, they must be passed back to the user """"base"""" for saving. You could change your naming to:
";;;;;;
;;;;;;
class UserRepo extends BaseRepo
{
    // user-specific repo code...
}

$userRepo = $this->getRepo('User');
$user = $userRepo->getUserByName('jane');
$user['email'] = 'jane@new.email';
$userRepo->save($user);
;;;;;;
;;;;;;
There's nothing wrong with having a base repo.
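For illustration, a minimal sketch of what such a base repo might contain (the findOneBy helper, the $table property and the injected PDO connection are assumptions made for this sketch, not a prescribed design):

abstract class BaseRepo
{
    protected $db;      // PDO connection, injected
    protected $table;   // set by each concrete repo, e.g. 'users'

    public function __construct(PDO $db)
    {
        $this->db = $db;
    }

    // Generic single-row fetch that returns a plain array, so the data
    // stays unaware of the persistence layer. $column is supplied by the
    // concrete repo, never by user input.
    protected function findOneBy($column, $value)
    {
        $stmt = $this->db->prepare(
            "SELECT * FROM {$this->table} WHERE {$column} = ? LIMIT 1"
        );
        $stmt->execute(array($value));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row === false ? null : $row;
    }
}

class UserRepo extends BaseRepo
{
    protected $table = 'users';

    public function getUserByName($name)
    {
        return $this->findOneBy('name', $name);
    }
}

A save() counterpart would work the same way: the concrete repo names the table and columns, while the base repo does the generic SQL plumbing.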
;;;;;;
"
10283592"",""No"",""If you are trying to learn good MVC practices from PHP frameworks, then you are doing it wrong. Even the best frameworks in PHP are riddled with design mistakes and flaws.
";;;;;;
;;;;;;
"The frameworks what have """"BaseModel"""" usually are implementing some sort of ORM. And the most popular pattern here is ActiveRecord. It is nice for simple tables, with no relations to others (basically - glorified getters/setters). But starts to break down, when dealing with more complex structures and queries. Usually causing extensive technical debt.
";;;;;;
;;;;;;
"The other reason, why this approach causes problem,s is that your """"model"""" in this case has too may responsibilities. Your business logic is tightly bound to the storage, and storage is welded to the logic. As the application grows, such """"models"""" will accumulate hacks.
";;;;;;
;;;;;;
"And no, the """"bunch of bloggers"""" are not sheep. They just have read books about application architecture and object oriented programming. When was last time you read a book on this subject ?
";;;;;;
"
10283592"",""No"",""""";;;;;;
"10283592,""No"",""A BaseModel in PHP MVC, good or bad?""";;;;;;
"10734197,""No"",""Which one of these is the best ORM for PHP in terms of performance? I'd like to use it in Codeigniter framework as well. I'm trying php-activerecord right now, and it doesn't act bad. I took a look to Doctrine2, DataMapper and stuff, but I cannot tell anything about performances until I build a big project (and at that time, it would be too late to change my mind).
";;;;;;
;;;;;;
Any thoughts?
;;;;;;
"
10734197"",""No"",""When you start a new project you can get Objects in and out of the database fast, without worrying how you do it. You can also switch DBMS very fast from SQLite on your local dev machine, to MySQL on your testing or staging servers. When the performance part kicks in, your application has already matured a bit, models are somewhat fixed and programlogic is running. Extending the models to use SQL instead of the ORM is more convenient then, because the structure of the project isn't changing (so fast) anymore.
";;;;;;
;;;;;;
I suggest doctrine.
;;;;;;
;;;;;;
Integration into CI is painless with Doctrine (there are many posts available on the internet and even some here on SO) so you don't have to learn any fancy new conventions.
;;;;;;
;;;;;;
Whatever you choose, keep in mind that any ORM will add a massive overhead (compared to CI's base footprint) to a very, very lightweight framework. So you sacrifice the lightness for powerful database features and more abstraction.
;;;;;;
;;;;;;
Check this link I found; it gives very good info:
"http://www.phpandstuff.com/articles/codeigniter-doctrine-from-scratch-day-1-install-and-setup
";;;;;;
"
10734197"",""No"",""Here is a link to GAS ORM vs PHP Active Record";;;;;;
(scroll to the bottom)
;;;;;;
;;;;;;
Result? Gas ORM is way more efficient than PHP Active Record
;;;;;;
"
10734197"",""No"",""If your goal is performance, then use of ORM is the wrong choice to begin with.
";;;;;;
;;;;;;
"ORMs are focused on forcing relational structure to act like objects, which is the source of the problem (the loss of performance, and limitations of API). This is why performance is the thing on which ORMs are NOT focused on. What ORMs are really good at is fast prototyping, but when used in large projects, they usually end up causing technical debt.
";;;;;;
;;;;;;
"Also .. if you are serious about using CodeIgniter, please, read the source, and decide, if this is the quality of code you want to base your project on.
";;;;;;
;;;;;;
P.S. here are two articles you might find a bit inflammatory, but with relevant points:
;;;;;;
;;;;;;
;;;;;;
"
10734197"",""No"",""""";;;;;;
"10734197,""No"",""PHP best orm in terms of performance?""";;;;;;
"11542153,""No"",""UPDATE:";;;;;;
I've edited the title and added this text to better explain what I'm trying to achieve: I'm trying to create a new application from the ground up, but don't want the business layer to know about the persistence layer, in the same way one would not want the business layer to know about a REST API layer. Below is an example of a persistence layer that I would like to use. I'm looking for good advice on integrating with this i.e. I need help with the design/architecture to cleanly split the responsibilities between business logic and persistence logic. Maybe a concept along the line of marshalling and unmarshalling of persistence objects to domain objects.
;;;;;;
;;;;;;
"From a SLICK (a.k.a. ScalaQuery) test example, this is how you create a many-to-many database relationship. This will create 3 tables: a, b and a_to_b, where a_to_b keeps links of rows in table a and b.
";;;;;;
;;;;;;
"object A extends Table[(Int, String)](""""a"""") {";;;;;;
" def id = column[Int](""""id"""", O.PrimaryKey)";;;;;;
" def s = column[String](""""s"""")";;;;;;
def * = id ~ s;;;;;;
def bs = AToB.filter(_.aId === id).flatMap(_.bFK);;;;;;
};;;;;;
;;;;;;
"object B extends Table[(Int, String)](""""b"""") {";;;;;;
" def id = column[Int](""""id"""", O.PrimaryKey)";;;;;;
" def s = column[String](""""s"""")";;;;;;
def * = id ~ s;;;;;;
def as = AToB.filter(_.bId === id).flatMap(_.aFK);;;;;;
};;;;;;
;;;;;;
"object AToB extends Table[(Int, Int)](""""a_to_b"""") {";;;;;;
" def aId = column[Int](""""a"""")";;;;;;
" def bId = column[Int](""""b"""")";;;;;;
def * = aId ~ bId;;;;;;
" def aFK = foreignKey(""""a_fk"""", aId, A)(a =>"; a.id);;;;;
" def bFK = foreignKey(""""b_fk"""", bId, B)(b =>"; b.id);;;;;
};;;;;;
;;;;;;
(A.ddl ++ B.ddl ++ AToB.ddl).create;;;;;;
A.insertAll(1 ->" """"a"""", 2 ->";" """"b"""", 3 ->";" """"c"""")";;;
B.insertAll(1 ->" """"x"""", 2 ->";" """"y"""", 3 ->";" """"z"""")";;;
AToB.insertAll(1 -> 1, 1 -> 2, 2 -> 2, 2 -> 3);;
;;;;;;
val q1 = for {;;;;;;
a <#NOME?;2;;;;
b <#NOME?;;;;;
} yield (a.s, b.s);;;;;;
q1.foreach(x =>" println("""" """"+x))";;;;;
"assertEquals(Set((""""b"""",""""y""""), (""""b"""",""""z"""")), q1.list.toSet)";;;;;;
;;;;;;
;;;;;;
As my next step, I would like to take this up one level (I still want to use SLICK but wrap it nicely), to working with objects. So in pseudo code it would be great to do something like:
;;;;;;
;;;;;;
objectOfTypeA.save()
objectOfTypeB.save()
linkAtoB.save(objectOfTypeA, objectOfTypeB)
;;;;;;
;;;;;;
Or something like that. I have my ideas on how I might approach this in Java, but I'm starting to realize that some of my object-oriented ideas from pure OO languages are starting to fail me. Can anyone please give me some pointers as to how to approach this problem in Scala?
;;;;;;
;;;;;;
For example: Do I create simple objects that just wrap or extend the table objects, and then include these (composition) into another class that manages them?
;;;;;;
;;;;;;
Any ideas, guidance, example (please), that will help me better approach this problem as a designer and coder will be greatly appreciated.
;;;;;;
"
11542153"",""No"",""A good solution for simple persistence requirements is the ActiveRecord pattern: http://en.wikipedia.org/wiki/Active_record_pattern . This is implemented in Ruby and in Play! framework 1.2, and you can easily implement it in Scala in a stand-alone application
";;;;;;
;;;;;;
The only requirement is to have a singleton DB or a singleton service to get a reference to the DB you require. I personally would go for an implementation based on the following:
;;;;;;
;;;;;;
;;;;;;
- A generic trait ActiveRecord
;;;;;;
- A generic typeclass ActiveRecordHandler
;;;;;;
;;;;;;
;;;;;;
Exploiting the power of implicits, you could obtain an amazing syntax:
;;;;;;
;;;;;;
trait ActiveRecordHandler[T] {

  def save(t: T): T

  def delete[A <: Serializable](primaryKey: A): Option[T]

  def find(query: String): Traversable[T]
}

object ActiveRecordHandler {
  // Note that an implicit val inside an object with the same name as the trait
  // is one of the ways to have the implicit in scope.
  implicit val myClassHandler = new ActiveRecordHandler[MyClass] {

    def save(myClass: MyClass) = myClass

    def delete[A <: Serializable](primaryKey: A) = None

    def find(query: String) = List(MyClass("hello"), MyClass("goodbye"))
  }
}

trait ActiveRecord[RecordType] {
  self: RecordType =>

  def save(implicit activeRecordHandler: ActiveRecordHandler[RecordType]): RecordType =
    activeRecordHandler.save(this)

  def delete[A <: Serializable](primaryKey: A)(implicit activeRecordHandler: ActiveRecordHandler[RecordType]): Option[RecordType] =
    activeRecordHandler.delete(primaryKey)
}

case class MyClass(name: String) extends ActiveRecord[MyClass]

object MyClass {
  def main(args: Array[String]) = {
    MyClass("10").save
  }
}
;;;;;;
;;;;;;
With such a solution, you only need your class to extend ActiveRecord[T] and have an implicit ActiveRecordHandler[T] to handle this.
;;;;;;
;;;;;;
"There is actually also an implementation: https://github.com/aselab/scala-activerecord which is based on similar idea, but instead of making the ActiveRecord having an abstract type, it declares a generic companion object.
";;;;;;
;;;;;;
;;;;;;
;;;;;;
A general but very important comment on the ActiveRecord pattern is that it helps meet simple persistence requirements, but cannot deal with more complex ones: for example, when you want to persist multiple objects under the same transaction.
;;;;;;
;;;;;;
If your application requires more complex persistence logic, the best approach is to introduce a persistence service which exposes only a limited set of functions to the client classes, for example
;;;;;;
;;;;;;
def persist(objectsofTypeA:Traversable[A],objectsOfTypeB:Traversable[B])
;;;;;;
;;;;;;
Please also note that according to your application complexity, you might want to expose this logic in different fashions:
;;;;;;
;;;;;;
;;;;;;
- as a singleton object in the case your application is simple, and you do not want your persistence logic to be pluggable
;;;;;;
"- through a singleton object which acts as a sort as a """"application context"""", so that in your application at startup you can decide which persistence logic you want to use.
";;;;;;
- with some sort of lookup service pattern, if your application is distributed.
;;;;;;
;;;;;;
"
11542153"",""No"",""The best idea would be to implement something like data mapper pattern. Which, in contrast to active record, will not violate SRP.
";;;;;;
;;;;;;
Since I am not a Scala developer, I will not show any code.
;;;;;;
;;;;;;
The idea is following:
;;;;;;
;;;;;;
;;;;;;
- create a domain object instance

- set conditions on the element (for example setId(42), if you are looking for an element by ID)

- create a data mapper instance

- execute the fetch() method on the mapper, passing in the domain object as a parameter
;;;;;;
;;;;;;
The mapper would look up the current parameters of the provided domain object and, based on those parameters, retrieve information from storage (which might be an SQL database, a JSON file or maybe a remote REST API). If information is retrieved, it assigns the values to the domain object.
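Since the flow is language-agnostic, here is a bare-bones PHP sketch of those four steps (all class and method names, such as PdoUserMapper, are hypothetical):

class User                         // the domain object: no storage code inside
{
    private $id;
    private $name;

    public function setId($id)     { $this->id = $id; }
    public function getId()        { return $this->id; }
    public function setName($name) { $this->name = $name; }
    public function getName()      { return $this->name; }
}

class PdoUserMapper                // knows about storage, not business rules
{
    private $db;

    public function __construct(PDO $db)
    {
        $this->db = $db;
    }

    // Reads the condition from the domain object (here: the ID), queries
    // the storage and pushes the retrieved values back into the object.
    public function fetch(User $user)
    {
        $stmt = $this->db->prepare('SELECT name FROM users WHERE id = ?');
        $stmt->execute(array($user->getId()));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row) {
            $user->setName($row['name']);
        }
        return (bool) $row;
    }
}

// Usage, following the four steps listed above
// ($pdo is assumed to be an existing PDO connection):
$user = new User();                 // 1. create the domain object
$user->setId(42);                   // 2. set the condition
$mapper = new PdoUserMapper($pdo);  // 3. create the mapper
$mapper->fetch($user);              // 4. the mapper populates the object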
;;;;;;
;;;;;;
Also, I must note that data mappers are created to work with a specific domain object's interface, but the information which they pass from the domain object to storage and back can be mapped to multiple SQL tables or multiple REST resources.
;;;;;;
;;;;;;
This way you can easily replace the mapper when you switch to a different storage medium, or even unit-test the logic in domain objects without touching the real storage. Also, if you decide to add caching at some point, that would be just another mapper, which tries to fetch information from the cache and, if it fails, lets the mapper for persistent storage kick in.
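And the caching idea could look roughly like this: a hypothetical decorator that checks its local cache first and only falls back to the persistent mapper on a miss (it reuses the User and PdoUserMapper names from the sketch above):

class CachingUserMapper
{
    private $inner;            // e.g. the PdoUserMapper from the sketch above
    private $cache = array();

    public function __construct($innerMapper)
    {
        $this->inner = $innerMapper;
    }

    public function fetch(User $user)
    {
        $id = $user->getId();
        if (isset($this->cache[$id])) {
            $user->setName($this->cache[$id]);   // cache hit: storage untouched
            return true;
        }
        if ($this->inner->fetch($user)) {        // miss: ask the real mapper
            $this->cache[$id] = $user->getName();
            return true;
        }
        return false;
    }
}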
;;;;;;
;;;;;;
Domain object (or, in some cases, a collection of domain objects) would be completely unaware of whether it is stored or retrieved. That would be the responsibility of the data mappers.
;;;;;;
;;;;;;
"If this is all in MVC context, then, to fully implement this, you would need another group of structures in the model layer. I call them """"services"""" (please share, of you come up with better name). They are responsible for containing the interaction between data mappers and domain objects. This way you can prevent the business logic from leaking in the presentation layer (controllers, to be exact), and these services create a natural interface for interaction between business (also know as model) layer and the presentation layer.
";;;;;;
;;;;;;
P.S. Once again, sorry that I cannot provide any code examples, because I am a PHP developer and have no idea how to write code in Scala.
;;;;;;
;;;;;;
"P.P.S. If you are using data mapper pattern, the best option is to write mappers manually and not use any 3rd party ORM, which claims to implement it. It would give you more control over codebase and avoid pointless technical debt [1] [2].
";;;;;;
"
11542153"",""No"",""""";;;;;;
"11542153,""No"",""How do I abstract the domain layer from the persistence layer in Scala""";;;;;;
"13051281,""No"",""Lately I have upgraded my application work in an event driven architecture using Spring3.1
";;;;;;
;;;;;;
I was wondering what you think:
;;;;;;
;;;;;;
;;;;;;
having a DAO instance in each class which needs to insert/update/etc. a record in the DB (the regular way), or

shall I send messages to the DAO (via JMS/channels/whatever), where the message's content is the instruction of what to do (insert/update/etc. a record in the DB)?
;;;;;;
;;;;;;
;;;;;;
How good is option 2 in terms of loose coupling?
;;;;;;
;;;;;;
Maybe it's overkill?
;;;;;;
;;;;;;
These or any other suggestions or advice are welcome.
;;;;;;
;;;;;;
thanks.;;;;;;
ray.
;;;;;;
"
13051281"",""No"",""Loose coupling doesn't mean """"adding"""" more concrete layers to your application (a message queue etc.). If the """"service"""" implementation classes interact with the DAO layer via interfaces (Spring DAO bean injection is a perfect use case which comes to mind), you are pretty much operating at an abstract level.
";;;;;;
;;;;;;
If you then swap out the concrete DAO classes injection with a messaging client which posts message to another service, your code will continue to function as it was previously without significant changes. Of course, there is always a disconnect between the blocking/non-blocking approach but nothing which a good abstraction can't solve. My suggestion would be to look into framework/libraries like Guice for creating the initial draft/refactor of your application as opposed to adding new layers. If then, at some point you feel that non-blocking DB calls are the way to go, you can implement them easily. Putting that logic upfront would just increase the technical debt.
;;;;;;
"
13051281"",""No"",""""";;;;;;
"13051281,""No"",""Architecture event driven advice""";;;;;;
"13400532,""No"",""Currently we use SVN for our source control. Because of the extra features and integration in the development environment we would like to migrate to TFS 2012.
";;;;;;
;;;;;;
We have a lot of portals running that are built in ASP.NET. Within our portals we use a lot of standard components. Currently all portals use the same code base. This means that whenever we change something in the shared codebase it is automatically distributed (whenever a portal is published). We are very used to this way of working and we know there is a risk of breaking code in other portals. However, publishing changes to all other portals individually would cost way too much time. So to do this we use externals in SVN.
;;;;;;
;;;;;;
I would really like to keep this functionality up and running. So my question is: is there a way to create an externals-like system (as in SVN) in TFS, or is there a really good alternative that works just as efficiently to replace this functionality?
;;;;;;
"
13400532"",""No"",""There are a couple of suggestion in the Visual Studio Team Foundation Server Branching and Merging Guide.
";;;;;;
;;;;;;
"If you download the """"Everything"""" package and look in """"All Guides"""" zip and have a read of """"Advanced Version Control Guide"""".
";;;;;;
;;;;;;
Pages 5-19 (Version 2.1) cover Managing Shared Resources; there's a lot there and summarising it all for Stack Overflow would probably do the Rangers an injustice, so I'll just point you there.
;;;;;;
"
13400532"",""No"",""Bottom line: No TFS does not have an equivalent to """"svn:externals"""".
";;;;;;
;;;;;;
Code sharing is BAD and leads to code duplication, not code reuse. Be dependent on compiled code instead.
;;;;;;
;;;;;;
"You should be dependant only on the """"output"""" of the shared library and not on the source files. As to scenarios, I can think of no scenarios where it is advisable to share source files between products/solutions.
";;;;;;
;;;;;;
The reason is that things can get complicated and unwieldy very quickly. What if you have more than one dependant on a shared library, all making changes to the same code and alternately breaking each other? The only way around that is to start branching your common code into the other projects, which adds a level of complexity and integration that will never get handled in the long run, and you will end up with three or more versions of the same code base over time.
;;;;;;
;;;;;;
What you should do is have a single core component that is changed and built to produce output. This output can then be pulled on demand into the other projects that depend on those changes (or not). This results in fewer breakages, better architecture and less technical debt.
;;;;;;
;;;;;;
You can even introduce NuGet into the equation, using either a hosted or an in-house server to publish your common component and notify each of the consumers when a new version is available.
;;;;;;
"
13400532"",""No"",""""";;;;;;
"13400532,""No"",""SVN externals alternative in team foundation server 2012""";;;;;;
"13834806,""No"",""I have a base applicaiton that will evolve. Right now UI includes BLL. DAL is a separate library that serves its purpose.
";;;;;;
;;;;;;
"I dont have time to do everything right now, so i want to bypass patterns that help with decoupling (IoC , DI as i have been proposed here).
";;;;;;
;;;;;;
I would like to create my BLL and have it reference the DAL directly. This will give me the opportunity to start creating the separate UIs that I need now.
;;;;;;
;;;;;;
My question is: can I do it? Can I focus right now on creating my 3 layers and gradually apply design patterns to make my code better?
;;;;;;
;;;;;;
Added Info:
;;;;;;
;;;;;;
I have the luxury of time because my first app will not be used during the development of the second one, so I will have time to optimize my coding structure. The question is what I can do now to split the UI into UI + BLL as effectively as I can. My plan is to move the DAL initialization into the BLL and have the UI initialize the BLL. Is there something else I can do that will help me more when applying IoC/DI later on?
;;;;;;
"
13834806"",""No"",""I like very much the """"debt"""" metaphore in this case. The quicker you push to deliver your first working version and compromise on code quality and engineering best practices, the longer and harder it is going to be to get any new change requests implemented.
";;;;;;
;;;;;;
"It is up to you to decide how much """"in debt"""" you want to get. Think that creating a product from scratch is a unique opportunity to show your qualities, if you deliver it in say one month but need five months to implement a few simple change requests your credibility and reputation will suffer and they will replace you.
";;;;;;
"
13834806"",""No"",""You can set up """"poor man's dependency injection"""", using this kind of structure:
";;;;;;
;;;;;;
public class MyEndClass
{
    private readonly IDependency dependency;

    public MyEndClass(IDependency dependency)
    {
        this.dependency = dependency;
    }

    // temporary constructor until we introduce IoC
    public MyEndClass() : this(new Dependency())
    {
    }
}
;;;;;;
"
13834806"",""No"",""Getting a working product quickly can often be the most important thing so taking a concious decision to skip over some engineering practices may be the right one.
";;;;;;
;;;;;;
However, you need to ensure you are making the right trade-offs. Refactoring is not free and dealing with technical debt needs to be planned for.
;;;;;;
;;;;;;
The path of least resistance once the end users see your initial version is usually to keep adding functionality over revisiting initial design decisions.
;;;;;;
;;;;;;
To put it another way, once version 1.0 is in the wild you will have a hard job persuading management that you need to spend a large number of man days reworking things under the hood for no perceivable change or benefit to the customer.
;;;;;;
;;;;;;
Without knowing the details of your app or requirements it's impossible to give concrete advice. In general though spending some time up front thinking about the design is orders of magnitude quicker and simpler than trying to do the same thing some way into development.
;;;;;;
"
13834806"",""No"",""""";;;;;;
"13834806,""No"",""Develop layers first having in mind transition to Dependency Inversion principle and Inversion of Control at a later stage?""";;;;;;
"15083342,""No"",""Modern MVC frameworks have their own implementation of data access layers that do not require SQL statements to be written. In terms of performance and scalability, are there any drawbacks, for instance, when using
";;;;;;
;;;;;;
$user = User::where('email', '=', $email)->first();
;;;;;;
;;;;;;
instead of using prepared statements in raw SQL like
;;;;;;
;;;;;;
$user = DB::connection()->pdo->prepare("SELECT * from users where `email` = ?");
;;;;;;
;;;;;;
Since MVC frameworks like Laravel and CakePHP also allow the latter approach, I am not sure which of the two methods is better in terms of performance and scalability.
;;;;;;
"
15083342"",""No"",""of course you will always have the overhead of running through a class and assembling the query.
";;;;;;
;;;;;;
"Yet it helps you to prevent errors. Typos like """"were id ="""" cant happen(or shouldnt). Except from that those layers already do a lot of stuff for you.
";;;;;;
;;;;;;
Like escaping, parsing, validating etc., so take the overhead and be sure that a lot of failures or security issues won't happen.
;;;;;;
"
15083342"",""No"",""Yes, there are drawbacks both in terms of performance and scalability.
";;;;;;
;;;;;;
All these ORMs and ARs are quite good only with basic queries.
;;;;;;
But when it comes to some complex issues, they become either unbearably complex or merely helpless.
;;;;;;
"There is no way to inject """"USE INDEX"""", """"DELAYED"""" or the like performance-boosting commands in these sleek operators.
";;;;;;
;;;;;;
Same goes for scalability.
;;;;;;
Every time you're going to use some non-standard operator, you're going to scratch your head.
;;;;;;
;;;;;;
There is also a portability issue.
;;;;;;
SQL is a lingua franca for web developers; everyone can read and write it.
A proprietary ORM, on the other hand, can put them in a fix.
;;;;;;
;;;;;;
Nevertheless, your second code sample is no less ugly and unusable.
;;;;;;
;;;;;;
$user = DB::connection()->pdo->"prepare(""""SELECT * from users where email=?"""")";;;;
;;;;;;
;;;;;;
DB::connection()->pdo->prepare() does not return any users. It returns a statement handle which has to be used over the following several lines to get the actual user info,
;;;;
adding tons of useless code to your scripts.
;;;;;;
And that's the ordinary case of selecting by a scalar value. Try it with an INSERT or a mere IN() clause and your code will blow up to several screens in height.
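To make that concrete, here is roughly what the prepared-statement route expands to before you actually hold a $user array (the DB::connection()->pdo call is copied from the question; the rest is plain PDO):

$stmt = DB::connection()->pdo->prepare('SELECT * FROM users WHERE email = ?');
$stmt->execute(array($email));
$user = $stmt->fetch(PDO::FETCH_ASSOC);   // only now do we have the actual row

if ($user === false) {
    // no such user: handle it here
}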
;;;;;;
;;;;;;
Why not make it really get the user info?
;;;;;;
;;;;;;
$user = DB::conn()->"getRow(""""SELECT * from users where email=?s"""",$email)";;;;;
;;;;;;
;;;;;;
Look: you keep your SQL, yet you've made it usable.
;;;;;;
"
15083342"",""No"",""";;;;;;
" Rant:
What you call """"modern MVC frameworks"""" (with few exceptions) are nowhere close implementing MVC. And those """"layers that do not require SQL statements"""" are actually extremely harmful in large scale projects(where MVC should be actually used).
";;;;;;
;;;;;;
;;;;;;
"My advice would be to avoid use of any built-in ORM or query-builder. The ORMs that so-called """"mvc frameworks"""" are bundled with are usually implementations of active record, which has extremely limited use-case. Basically, AR based implementations for domain entities are pragmatic only if you are using just the basic CRUD operations (no JOINs or other above-beginner level sql queries) and only simple attribute validation (no cross-checked fields or interactions with other entities). Technically you can use active record instances in more complicated cased, but then you ill start to incur technical debt.
";;;;;;
;;;;;;
"The best option would be to separate the domain logic from storage logic and implement domain objects and data mappers for each of the aspect of model layer respectively.
";;;;;;
"
15083342"",""No"",""""";;;;;;
"15083342,""No"",""SQL statements vs MVC data access layer in PHP""";;;;;;
"17455303,""No"",""I have a web app with HTML tables containing input boxes everywhere in it. I want to be able to add rows to these tables from the C# side of things. To accomplish this, I use an asp button that calls this method:
";;;;;;
;;;;;;
private void AddRow()
{
    HtmlTableRow tRow = new HtmlTableRow();

    HtmlTableCell cell = new HtmlTableCell();
    HtmlTableCell cell2 = new HtmlTableCell();
    HtmlTableCell cell3 = new HtmlTableCell();
    HtmlTableCell cell4 = new HtmlTableCell();

    cell.InnerHtml = "<input type=\"text\" size=\"29\">";
    tRow.Cells.Add(cell);

    cell2.InnerHtml = "<input type=\"text\" size=\"9\" class=\"units commas\" />";
    cell2.Attributes.Add("class", "leftBorder textCenter");
    tRow.Cells.Add(cell2);

    cell3.InnerHtml = "<input type=\"text\" size=\"8\" class=\"value commas\" />";
    cell3.Attributes.Add("class", "leftBorder textCenter");
    tRow.Cells.Add(cell3);

    cell4.InnerHtml = "<input type=\"text\" size=\"11\" readonly=\"readonly\" tabindex=\"-1\" class=\"totalA1 autoTotal\" />";
    cell4.Attributes.Add("class", "leftBorder rightBorder textRight");
    tRow.Cells.Add(cell4);

    someTable.Rows.Add(tRow);
}
;;;;;;
;;;;;;
This works beautifully... for exactly one row. I can click the button, and it adds a row. If I click it again, it doesn't do anything. More specifically, I suspect it's removing the currently added row, restoring the document to the 'default' state, and then adding a row (effectively doing nothing).
;;;;;;
;;;;;;
Assuming I'm right, I need to somehow be able to append a row to another dynamically created row, instead of just replace it. If I'm not right, I just need a means to be able to continually add rows on a button press.
;;;;;;
;;;;;;
How would I go about doing this?
;;;;;;
;;;;;;
EDIT: I should specify, all this could be done in a loop, all at once. I was hoping to get it to work on a button press just for the sake of testing, but it can all be neatly tucked into a loop of some kind. I've had (some) success dropping it in one.
;;;;;;
"
17455303"",""No"",""The sentiments expressed by the other answers is completely correct in that this architecture is very flawed (it does things the way they were done in the mid 2000's, a lifetime ago in web). That being said, you have constraints and you can't change them. Here are two things I would check, one of which was already suggested in a comment:
";;;;;;
;;;;;;
;;;;;;
Postback is interfering with table initialization. When a visitor first loads your aspx page, the browser will perform an HTTP GET request to your page. The IsPostback property of your aspx page class will be false. When the user clicks your button, they will make a POST request to your aspx page, passing along a bunch of variables for the current state of the page if they have modified it on their browser as well as a .NET-specific set of properties indicating what was pressed and what event handler the server should execute (in your case, the event handler calling AddRow will be called, but only after Page_Load is executed first). This is why they suggested wrapping your Page_Load logic in an if(!IsPostback){}.
;;;;;;
"ViewState is not enabled for the HtmlTable control. The ViewState is the .NET-specific implementation of serializing all that the server knows about the HTML (input boxes, etc.) into a hidden field in your output HTML. If you want .NET to remember what the state of various HTML tags are (what they contain, what the user filled out in each, etc.), then they need to have an entry in the ViewState, which is passed from postback to postback. Be warned, though, as soon as someone refreshes a page without a button click or using back and forward buttons on the browser (because remember they correspond to HTTP GET requests, not HTTP POST requests) the viewstate will be reinitialized anyways. The only way around this is stuffing stuff into the Session. If you're ok with that, then to enable the viewstate, use the .EnableViewState() method on the HtmlTable control, although keep in mind this will increase your web app's page size since .NET will serialize what the table contains into a string and put it in a hidden input variable. This will make it """"easy"""" to write all the server side logic, but at a huge cost if the table becomes very big.
";;;;;;
;;;;;;
;;;;;;
It may seem right now that the path of least resistance is to just make the existing infrastructure work, but believe me, you are incurring ever-increasing amounts of technical debt that either you or someone else will have to pay off down the line. I would highly recommend moving to a client-side add-row method; it's the only way around this postback-viewstate-model madness in WebForms.
;;;;;;
"
17455303"",""No"",""The absolute best way to do what you need is client side, NOT server side. For what you're doing, it makes no sense to post back to the server EACH TIME you insert a new row (in my opinion). Is there a specific reason you must do this server side (C#)?
";;;;;;
;;;;;;
Use Javascript (jQuery) to dynamically insert new rows into your table upon the button click. The page won't refresh, it's faster, and makes for a LOT better user experience.
;;;;;;
;;;;;;
"I assume you're familiar with Javascript, but you can tie a client side event to your server side button - just add onclientclick=""""insertRow()"; where onclientclick is an attribute of the button control and insertRow() is a Javascript method defined in your page between <script></script> tags.
;
;;;;;;
If you need me to write out an example on how to do this, please let me know and I can edit my answer.
;;;;;;
"
17455303"",""No"",""""";;;;;;
"17455303,""No"",""Cannot add more than one row to a table dynamically using ASP/C#""";;;;;;
"17771512,""No"",""After working Scrum(ish) in a previous workplace, I am trying to implement it in my new place of work for a brand new project (I am no scrum expert). We have some pre-requisites to code before we can begin working on the stories (which are being groomed in the mean time). Things like database design, api design, etc. We plan to use two week iterations and it's just not clear to me how the first one (or two) can provide something useful to the customer and """"potentially shippable"""" if we first have to """"lay down some groundwork"""" ? Any ideas on how to treat this?
";;;;;;
"
17771512"",""No"",""What you are experiencing is very typical of new teams wanting to move to Scrum where they are coming from more of a traditional process. Adapting to Scrum is very, very hard and we always say this, and the reason for this is there needs to be many mindset changes.
";;;;;;
;;;;;;
The first change the team should understand is that when bringing a PBI (requirement) into a Sprint, it is only a well-defined requirement and nothing else. This means there are no designs, database schemas or APIs for the requirement. The team has to do all of this in the sprint, plus build and test the requirement.
;;;;;;
;;;;;;
If you are new to Scrum, you most probably are squirming in your seat thinking it cannot be done. You are likely right for now, but this is where the hard work comes in ... changing the way teams work. This is hard.
;;;;;;
;;;;;;
Some pointers :-
;;;;;;
;;;;;;
Small Requirements - Most teams suffer from poor, ambiguous requirements which previously took days to design, build and test. The art is to learn to break these EPIC requirements down into smaller incremental requirements, where each one builds upon the previous but explicitly adds business value. Now, I am going to be blunt here ... this is the biggest challenge for most teams. Personally, I have been training/coaching Scrum for a number of years now and have not found any feature that cannot be broken down into small requirements with an average estimate of 2-3 days to fully complete.
;;;;;;
;;;;;;
Team composition - The team needs people in it with all the skills necessary to design, build and test the PBI. They should not have dependencies on other people outside of the team. Having dependencies cripples teams, but it highlights to management that there are not enough people with the specialised skills.
;;;;;;
;;;;;;
Sprint Planning - Sprint planning should be used to do high level designs and discuss how the team is going to tackle delivering each requirement. Many teams waste their sprint planning by clarifying weak requirements and debating the requirement. This is a sign of weak requirements and it should be addressed. Sprint planning is about discussing How to build/test a PBI and not What.
;;;;;;
;;;;;;
Coach - I would really recommend you hire an experienced contract coach/consultant to get you going and do things right. Trying to do this by yourself, just leads to a world of unnecessary pain.
;;;;;;
;;;;;;
Architecture - At the inception of the project, there is nothing wrong with the team and architects spending a day or two brainstorming the macro architecture of the product and discussing the technologies to be used. However, when it comes to new requirements, they are designed and adjusted into the product as you go. This sounds hard, but with the correct software engineering patterns using SOLID principles, well-defined patterns, as well as strong Continuous Integration and Unit Testing, the risks of a bad architecture are eliminated. There is no question that the team should have a member in it who has the skills to design and architect the new requirements. [There is lots of evidence on the web that an evolving architecture with refactoring results in a better application than a big upfront architecture - but that is another debate.]
;;;;;;
;;;;;;
Application Lifecycle Management - Invest in strong ALM tooling with CI, unit testing, test lab, continuous deployment. Having the right tools for the team allows you to deliver quickly, and a lack of these totally cripples you. CI with automated testing is essential for an incremental product as there is fast and constant change and you want to protect that a change does not break a previous requirement.
;;;;;;
;;;;;;
ScrumBut - Ken and Jeff no longer support the use of the term ScrumBut as it is perceived as elitism and often comes across as belittling. Instead it is preferred that teams are on the journey to implementing Scrum and helping them through coaching.
;;;;;;
;;;;;;
"Welcome to your journey into Scrum, hang in there as it is very hard initially. Once you fully """"get it"""", then you and your company will be really happy that you did.
";;;;;;
"
17771512"",""No"",""In an ideal world, Technical pre-requisites should be factored into the estimate of each story and you should only implement """"just enough"""" to complete the story. See """"Do The Simplest Thing That Could Possibly Work""""
";;;;;;
;;;;;;
"Why do you need to design the API or the Database? Try to avoid Big Up front design. Avoid building Frameworks up front, apply YAGNI
";;;;;;
;;;;;;
It's hard for you to understand how you could ship something in two weeks because you have the cart before the horse; that is, your priorities are wrong. The important thing is delivering customer software - not building databases or API designs.
;;;;;
;;;;;;
"This is a trade of against long term productivity and you should avoid accruing too much technical debt. Many Agile methodologies would argue that up-front work like this will be wrong and therefore should be avoided to minimise waste. Lean software recommends defering decisions to the Last Responsible Moment.
";;;;;;
"
17771512"",""No"",""""";;;;;;
"17771512,""No"",""Implementing scrum-but for first time: how to deal with technical pre-requisites?""";;;;;;
"18610541,""No"",""A Repository as defined by Martin Fowler is supposed to act like an in-memory domain object collection. This allows the application (in theory) to be ignorant of the persistence mechanism.
";;;;;;
;;;;;;
So under normal circumstances you'd have something like this:
;;;;;;
;;;;;;
public void MyBusinessLogicMethod () {
    ...
    IRepository<Customer> repository = myIocContainer.Resolve<IRepository<Customer>>();
    repository.Add(customer);
}
;;;;;;
;;;;;;
If however you have a series of inserts/updates that you wish to do and want a mechanism to roll back should any of them fail you'd need some sort of UnitOfWork implementation:
;;;;;;
;;;;;;
public void MyBusinessLogicMethod () {
    ...
    using (IUnitOfWork uow = new UnitOfWork()) {
        IRepository<Customer> customerRepo = myIocContainer.Resolve<IRepository<Customer>>(uow);
        customerRepo.Add(customer);

        IRepository<Order> orderRepo = myIocContainer.Resolve<IRepository<Order>>(uow);
        orderRepo.Add(order);

        IRepository<Invoice> invoiceRepo = myIocContainer.Resolve<IRepository<Invoice>>(uow);
        invoiceRepo.Update(invoice);

        uow.Save();
    }
}
;;;;;;
;;;;;;
However if you had some bizarre requirement that your Customer Repository was acting against a SqlServer database, your Order Repository against a MySql database and your Invoice Repository against a PostgreSQL database, how would you go about handling the Transactions for each database session?
;;;;;;
;;;;;;
Now, this is a bit of a contrived example for sure, but every Repository implementation I've come across seems to know at some level that it's really a particular database and ORM being used.
;;;;;;
;;;;;;
"Imagine another scenario where you have 2 repositories where one is going to a database and the other is calling a web service. The whole point of Repositories is that the application shouldn't care what data source you are going to but without jumping through some massive hoops I don't see how these scenarios can be accounted for without the application knowing at some level """"FYI this is going to data source x so we'd better treat it differently"""".
";;;;;;
;;;;;;
Is there a pattern or implementation that addresses this issue? It seems to me if you are using Database x and ORM y for your entire application then Repositories work splendidly, but if due to technical debt that course deviates then the benefits of repositories are greatly reduced.
;;;;;;
"
18610541"",""No"",""In your UnitOfWork, as suggested, you should use a TransactionScope transaction.";;;;;;
It elevates, in your case, to MSDTC and ensure all enlisted operations are correctly executed before commit or otherwise rollback.
;;;;;;
"
18610541"",""No"",""""";;;;;;
"18610541,""No"",""Mixing Repository implementations for different data sources""";;;;;;
"27914683,""No"",""There were times when organizations used point to point methods to integrate applications which the middle-ware tools helped to avoid by allowing applications to focus on their core area instead of each one of them writing integration logic.
";;;;;;
;;;;;;
Now the applications are saying that they are using web services for integration and do not really need middleware. How can I convince the application teams that middleware can still help in this situation? To me, web services are just an advanced point-to-point solution. Is SOA governance the only selling point here?
;;;;;;
"
27914683"",""No"",""I have some idea about this having seen it in my organization.";;;;;;
Lets take the case of what happened in my organization , we had a common middleware earlier but then later applications just like in the aforesaid case told they wanted no middle ware instead could do their own integration logic .;;;;;;
For e.g Oracle said they could OSB to write their logic and so on .;;;;;;
Then came a time when we had lot of ESBs - OSB , Axis , Message Broker and so on .;;;;;;
With the advent of Federated ESBs(Or applications writing integration logics) came their own problems:
;;;;;;
;;;;;;
1. No single service registry.

2. Standardization of service design was impossible, e.g. service contracts written and maintained varied between teams.

3. Service reusability was absolutely lost.

4. Technical debt if the OSB version was not supported later - we had to keep separate migration plans for separate ESBs.
;;;;;;
;;;;;;
Ultimately, we had to go ahead with just one ESB to avoid this mess and the technical debt.
;;;;;;
;;;;;;
Moreover, in your case, where applications write their own integration logic and they aren't standard ESBs, I foresee another problem: the implementation and standardization of integration design patterns, e.g. Publisher-Subscriber.
Hope this helps.
;;;;;;
"
27914683"",""No"",""Yeah, if you're doing point to point web services, you won't have the possibility to control the message traffic.
";;;;;;
;;;;;;
With a middleware ESB, the ESB provides facilities to queue messages and control the rate at which messages are consumed.
;;;;;;
;;;;;;
Furthermore, ESBs (and message brokers) provide guaranteed delivery for your messages. So, if your consumer is down, you don't lose messages.
;;;;;;
;;;;;;
With point to point web services you'd have to implement a queueing mechanism and guaranteed delivery mechanism within each web service. At that point, you're just reinventing the wheel.
;;;;;;
;;;;;;
Furthermore, you're putting much more responsibility on the web service consumer or the web service provider to implement routing/orchestration logic. This can get quite expensive if the end point doesn't belong to you but a third party that you will need to pay for changes.
;;;;;;
;;;;;;
And then what do you do if your company makes an acquisition and the software used by the other company is based on an old technology that doesn't support web services and doesn't have any option to communicate directly with a .NET or Java web service? To make things as painless as possible with minimal impact, the ESB shoulders all the routing/orchestration logic and, through its adapters, plugs into the other system to retrieve/create/update/delete data.
;;;;;;
;;;;;;
What makes an ESB rich is its library of adapters. They've been around long enough to have an adapter for pretty much any old and new technologies.
;;;;;;
;;;;;;
One of the most important things to do when studying ESB technologies is to make sure that the adapters that you need now and for the foreseeable future are available.
;;;;;;
"
27914683"",""No"",""in the age of the internet of things, and within a business domain the internet of capabilities, middleware is relegated to the ESB. Most internal business interactions want real time point to point. Consider ReST web services for example. If I query a resource I don't want it queue, I want a response. Coupled with the advent of service versioning (that is versioning on the uri) the consumer is insulated for change for a long time. should you modulate or buffer requests? that depends! do your consumer applications mind waiting around for a response for an indeterminate amount of time. The viewpoints involved are really fire and forget vs. real time integration. in the real time integration scenario if I place an order I want an order number now, not later. that's point to point. the service bus has no place in this scenario. so it depends on your methodology. but with the advent of ReST the wheels have fallen of the esBus. one final thought: with all of the networking capabilities we have at our disposal, lpars, VM's, routing"; should we be planning integration based on system unavailability? I don't think so.
;;;;;
"
27914683"",""No"",""""";;;;;;
"27914683,""No"",""point to point vs web services integration""";;;;;;
"28258223,""No"",""Most object oriented analysis and design books and resources describe the process where the analysis phase is followed by identifying classes. I understand that experience will often give you an idea of which architecture (if any) you should apply but is there a specific point in the object oriented design phase where this should occur? I'm about to start a large personal project and I want to make sure my choice of architecture doesn't disregard something from the analysis phase.
";;;;;;
"
28258223"",""No"",""This question implies that architectural patterns are chosen all at once. In an ideal world (where requirements don't change, and where developers can read a client/stakeholder's mind), it might be possible to come up with a huge design up front, and stick to it. That never happens. The only way to come up with software that is both functional and well designed is to constantly refactor as requirements become more clear. And at each stage of refactoring, it's possible that a subsystem requires a different architectural pattern.
";;;;;;
;;;;;;
"Of course, it's important to enter a project with some kind of """"plan of attack"""". But don't expect the design phase to be over once that is completed. No one understands all the requirements up front (even if you're your own client). Things will always change.
";;;;;;
;;;;;;
In short, if you're not choosing architectural patterns throughout the development process, you're either a mind reader, or you're racking up technical debt.
;;;;;;
"
28258223"",""No"",""""";;;;;;
"28258223,""No"",""Where in the object-oriented design process is an architecture pattern chosen?""";;;;;;
"28971227,""No"",""The benefits of using ES6 for Rails frontend are very attractive.
";;;;;;
;;;;;;
"I've made a topic branch in our Rails app that uses babel to transpile ES6 to ES5 via the asset pipeline. It works well, but as always I am weary of technical debt. Is there anyone that has good/bad reports of using such a system in production?
";;;;;;
"
28971227"",""No"",""There is a growing list of users, some are detailed in this issue
";;;;;;
;;;;;;
Where possible, Babel tries to provide the most performant polyfill for ES6 features, and this is backed up by their test suite. However, for some of the problems there are often more performant ES5 solutions available, at the expense of code clarity, speed of code production, etc.
;;;;;;
;;;;;;
In general though, I guess it would be up to your own apps performance testing to dictate whether any lack of performance (if any) is outweighed by speed and ease of development and maintenance.
;;;;;;
;;;;;;
I've only ever used it in simple to intermediate-complexity programs (in Node and in the browser) and never witnessed any performance problems or had any issues updating Babel (I may have been lucky with this, though). I've used it for stuff like dashboards, filterable lists, data management, and other little bits and pieces such as React components. None of it outrageously complex, though.
;;;;;;
;;;;;;
I guess the other thing that might be of use to you is to note that the project lead is incredibly active, the project is moving at breakneck speed and issue responses on both github and gitter are quick and informative.
;;;;;;
"
28971227"",""No"",""""";;;;;;
"28971227,""No"",""Is anyone using Babel/6-to-5 in a production Rails app?""";;;;;;
"29242469,""No"",""I've got Entity Framework 4.1 with .NET 4.5 running on ASP.NET in Windows 2008R2. I'm using EF code-first to connect to SQL Server 2008R2, and executing a fairly complex LINQ query, but resulting in just a Count().
";;;;;;
;;;;;;
I've reproduced the problem on two different web servers but only one database (production of course). It recently started happening with no application, database structure, or server changes on the web or database side.
;;;;;;
;;;;;;
"My problem is that executing the query under certain circumstances takes a ridiculous amount of time (close to 4 minutes). I can take the actual query, pulled from SQL Profiler, and execute in SSMS in about 1 second. This is consistent and reproducible for me, but if I change the value of one of the parameters (a """"Date after 2015-01-22"""" parameter) to something earlier, like 2015-01-01, or later like 2015-02-01, it works fine in EF. But I put it back to 2015-01-22 and it's slow again. I can repeat this over and over again.
";;;;;;
;;;;;;
I can then run a similar but unrelated query in EF, then come back to the original, and it runs fine this time - same exact query as before. But if I open a new browser, the cycle starts over again. That part also makes no sense - we're not doing anything to retain the data context in a user session, so I have no clue whatsoever why that comes into play.
;;;;;;
;;;;;;
But this all tells me that the data itself is fine.
;;;;;;
;;;;;;
In Profiler, when the query runs properly, it takes about a second or two, and shows about 2,000,000 in reads and about 2,000 in CPU. When it runs slowly, it takes 3.5 minutes, and the values are 300,000,000 and 200,000 - so reads are about 150 times higher and CPU is 100 times higher. Again, for the identical SQL statement.
;;;;;;
;;;;;;
Any suggestions on what EF might be doing differently that wouldn't show up in the query text? Is there some kind of hidden connection property which might cause a different execution plan in certain circumstances?
;;;;;;
;;;;;;
EDIT
;;;;;;
;;;;;;
The query that EF builds is one of the ones where it builds a giant string with the parameter included in the text, not as a SQL parameter:
;;;;;;
;;;;;;
exec sp_executesql
N'SELECT [GroupBy1].[A1] AS [C1]
FROM (
    SELECT COUNT(1) AS [A1]
    ...
    AND ([Extent1].[Added_Time] >= convert(datetime2, ''2015-01-22 00:00:00.0000000'', 121))
    ...
) AS [GroupBy1]'
;;;;;;
;;;;;;
EDIT
;;;;;;
;;;;;;
I'm not adding this as an answer since it doesn't actually address the underlying issue, but this did end up getting resolved by rebuilding indexes and recomputing statistics. That hadn't been done in longer than usual, and it seems to have cleared up whatever caused the issue.
;;;;;;
;;;;;;
I'll keep reading up on some of the links here in case this happens again, but since it's all working now and unreproduceable, I don't know if I'll ever know for sure exactly what it was doing.
;;;;;;
;;;;;;
Thanks for all the ideas.
;;;;;;
"
29242469"",""No"",""Realizing you are using Entity Framework 4.1, I would suggest you upgrade to Entity Framework 6.
";;;;;;
;;;;;;
There has been a lot of performance improvement and EF 6 is much faster than EF 4.1.
;;;;;;
;;;;;;
"The MSDN article about Entity Framework performance consideration mentioned in my other response has also a comparison between EF 4.1 and EF 6.
";;;;;;
;;;;;;
There might be a bit of refactoring needed as a result, but the improvement in performance should be worth it (and that would reduce the technical debt at the same time).
;;;;;;
"
29242469"",""No"",""Just to put this out there since it has not been addressed as a possibility:
";;;;;;
;;;;;;
Given that you are using Entity Framework (EF), if you are using Lazy Loading of entities, then EF requires Multiple Active Result Sets (MARS) to be enabled via the connection string. While it might seem entirely unrelated, MARS does sometimes produce this exact behavior of something running quickly in SSMS but horribly slow (seconds become several minutes) via EF.
;;;;;;
;;;;;;
One way to test this is to turn off Lazy Loading and either remove MultipleActiveResultSets=True from the connection string (the default is "false") or at least change it to MultipleActiveResultSets=False.
;;;;
;;;;;;
As far as I know, there is unfortunately no work-around or fix (currently) for this behavior.
;;;;;;
;;;;;;
"Here is an instance of this issue: Same query with the same query plan takes ~10x longer when executed from ADO.NET vs. SMSS
";;;;;;
"
29242469"",""No"",""I don't have an specific answer as to WHY this is happening, but it certainly looks to be related with how the query is handled more than the query itself. If you say that you don't have any issues running the same generated query from SSMS, then it isn't the problem.
";;;;;;
;;;;;;
A workaround you can try: A stored procedure. EF can handle them very well, and it is the ideal way to deal with potentially complicated or expensive queries.
;;;;;;
"
29242469"",""No"",""There is an excellent article about Entity Framework performance consideration here.
";;;;;;
;;;;;;
I would like to draw your attention to the section on Cold vs. Warm Query Execution:
;;;;;;
;;;;;;
;;;;;;
The very first time any query is made against a given model, the
Entity Framework does a lot of work behind the scenes to load and
validate the model. We frequently refer to this first query as a
"cold" query. Further queries against an already loaded model are
known as "warm" queries, and are much faster.
";;;;;;
;;;;;;
;;;;;;
"During LINQ query execution, the step """"Metadata loading"""" has a high impact on performance for Cold query execution. However, once loaded metadata will be cached and future queries will run much faster. The metadata are cached outside of the DbContext and will be re-usable as long as the application pool lives.
";;;;;;
;;;;;;
In order to improve performance, consider the following actions:
;;;;;;
;;;;;;
;;;;;;
- use pre-generated views
;;;;;;
- use query plan caching
;;;;;;
- use no tracking queries (only if accessing for read-only)
;;;;;;
- create a native image of Entity Framework (only relevant if using EF 6 or later)
;;;;;;
;;;;;;
;;;;;;
"All those points are well documented in the link provided above. In addition, you can find additional information about creating a native image of Entity Framework here.
";;;;;;
"
29242469"",""No"",""I recently had a very similar scenario, a query would run very fast executing it directly in the database, but had terrible performance using EF (version 5, in my case). It was not a network issue, the difference was from 4ms to 10 minutes.
";;;;;;
;;;;;;
The problem ended up being a mapping problem. I had a column mapped to NVARCHAR, while it was VARCHAR in the database. It seems harmless, but that resulted in an implicit conversion in the database, which totally ruined the performance.
;;;;;;
;;;;;;
I'm not entirely sure why this happens, but from the tests I ran it caused the database to do an Index Scan instead of an Index Seek, and those differ enormously in performance. The implicit conversion prevents SQL Server from using the index on the VARCHAR column directly (the predicate is no longer sargable), so it falls back to scanning.
;;;;;;
;;;;;;
"![]()
";;;;;;
;;;;;;
"I blogged about this here (disclaimer: it is in Portuguese), but later I found that Jimmy Bogard described this exact problem in a post from 2012, I suggest you check it out.
";;;;;;
;;;;;;
Since you do have a CONVERT in your query, I would say start from there. Double-check all your column mappings and look for differences between your table's columns and your entity's properties. Avoid implicit conversions in your query.
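;;;;;;
If the mismatch turns out to be a string column mapped as Unicode (NVARCHAR) while the database column is VARCHAR, one possible fix with the EF fluent API looks like this (entity and property names are invented):
;;;;;;
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string DocumentNumber { get; set; }   // stored as VARCHAR in the database
}

public class SampleContext : DbContext   // placeholder context
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Mark the property as non-Unicode so EF sends VARCHAR parameters instead of
        // NVARCHAR, avoiding the implicit conversion (and the index scan) in SQL Server.
        modelBuilder.Entity<Customer>()
                    .Property(c => c.DocumentNumber)
                    .IsUnicode(false);
        // Alternatively: .HasColumnType("varchar")
    }
}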
;;;;;;
"If you can, check your execution plan to find any inconsistencies, be aware of the yellow warning triangle that may indicate problems like this one about doing implicit conversion:
";;;;;;
;;;;;;
"
";;;;;;
"![]()
";;;;;;
;;;;;;
I hope this helps you somehow. It was a really difficult problem for us to track down, but it made sense in the end.
;;;;;;
"
29242469"",""No"",""""";;;;;;
"29242469,""No"",""Extremely slow and inefficient query execution from Entity Framework""";;;;;;
"30406828,""No"",""What are the pro's and cons of putting css &"; javascript code in your html file.
;;;;;
;;;;;;
I'm teaching students who are just getting started, and I'm not sure whether beginning with external files would be the best approach or whether having them put everything in a single file is more beneficial at this early learning stage. What are the pros and cons?
;;;;;;
"
30406828"",""No"",""Modularizing your code by separating your HTML, CSS, and Javascript code has plenty of benefits. Here’s what comes to mind:
";;;;;;
;;;;;;
If you’re placing all of your code into one file, you’re making it really difficult for other members of your team (or future engineers) to collaborate on your project. If four people are working on a project all off the same index.html file, imagine all the merge conflicts that you’re going to have to resolve over and over again.
;;;;;;
;;;;;;
When debugging, it’s going to be a lot easier to reference a smaller file containing a hundred lines of code than a monolithic one with thousands of lines.
;;;;;;
;;;;;;
It also adds technical debt to the project: when future engineers who inherit your work finally decide to modularize your file, they are going to have to spend a lot of time doing so, which will leave a negative impression of your work.
;;;;;;
;;;;;;
TL;DR — Always modularize your code! It makes it easier to read, understand, debug, and collaborate.
;;;;;
"
30406828"",""No"",""""";;;;;;
"30406828,""No"",""CSS and Javascript inside single HTML file. Pro's & Cons?""";;;;;;
"32274486,""No"",""I currently work on a Java based web application. Recently we created some REST endpoints using Spring. The reason for this was because we developed a hybrid mobile app that integrates with our main application via these end points.
";;;;;;
;;;;;;
The problem is that going forward we are not quite sure how to handle updates. If we update our API, e.g. we change the method signatures of the endpoint methods or the attributes on the DTOs that we return as JSON, then we would have an issue if our mobile users are running an outdated version of the mobile app.
;;;;;;
;;;;;;
What we want to implement is something that will force our users to update the app if it is out of date; I have seen a lot of mobile apps do this. So we thought of giving our REST API a version number and having the mobile app check whether the version it is using matches the version running on our server, and if not, force the user to update.
;;;;;;
;;;;;;
The problems we have are:
;;;;;;
;;;;;;
;;;;;;
We only have one version of our server running at any time, so how would we time our releases? What happens if we release a new version of our API and our mobile app but the app store does not yet have the latest version publicly available? The user would then be forced to update even though the updated app is not yet available to them.
;;;;;;
"How do we maintain the API version number? On the mobile app we can just configure that. But on the server it is not great to have to maintain a version number. The reason I say this is what if we make a change to a method signature or DTO, etc, and forget to update this version number manually before releasing? Surely there is a more automatic way to do this where some unique """"API key"""" is generated based on the current definition of the API? We could then use this instead of an API version number.
";;;;;;
;;;;;;
"
32274486"",""No"",""Sounds like you need to make backward-compatible updates to your API.
";;;;;;
;;;;;;
Since you're in control of the client code calling the API on the mobile side, just code your app to ignore new fields that appear in the JSON responses. That will make the app far less brittle and allow you to expand your objects at will. Make the most of HATEOAS and have your clients navigate the hyperlinks within your objects rather than hardcode them to your URL structure.
;;;;;;
;;;;;;
"You should start to build a culture and process of compatibility testing with each server release, so that you can verify automatically that your older API clients (which of course will live forever on the phones of people who never update their apps) will still work with the update you're planning for your server. In Semantic Versioning, this is akin to making a minor version upgrade to your API.
";;;;;;
;;;;;;
"If you believe you'll at some stage need to make a vastly incompatible API change that would break your older apps, then build in a """"compatibility check"""" into your API clients from the beginning. Upon startup, they should check a simple API on the server to do a basic version handshake. If the server responds with a """"we simply can't support your old client code anymore"""", then have your app error out with a message that tells the user to pull the latest version from the app store. But since that's a pretty nasty user experience, it's better to just build in sensible compatibility from the get-go.
";;;;;;
"
32274486"",""No"",""There are a few things you can do.
";;;;;;
;;;;;;
;;;;;;
- Architect API versioning in from the beginning. There are two common approaches I have seen for this with REST APIs: putting a URL prefix like /v1, /v2, etc. before all REST resource endpoints, or using the HTTP Accept header to negotiate versions. There are religious wars over which one is right; it's your API, so do what you think is right.
;;;;;;
- Abstract out business logic from API endpoint code within your source code. This way, you can have a v1 and v2 endpoint which re-use common code at a lower-level service layer. This is something you don't need to do from the get-go. You can wait until v2 of the API to start separating things out.
;;;;;;
- Automated testing of each build against existing API versions (and whatever new version you are building, but regression testing is the key point I am making).
;;;;;;
- Forcing app updates, or at least tracking usage by app version, can allow you to remove/cleanup any code supporting legacy versions.
;;;;;;
;;;;;;
;;;;;;
"I am working on a similar project, creating a new REST API for a new mobile app. I am partitioning the URL space by version, so https://api.blahblahblah/v1.0/resource.
";;;;;;
;;;;;;
For the moment, I have my business logic built right into the code that accepts the HTTP requests because there is no other use for such logic. However, when I need to make a new version, I will refactor the v1 API code to separate anything not v1-specific into a more re-usable module which can then be re-used.
;;;;;;
;;;;;;
Depending on how structurally different your versions are, you may need some redundancy to keep your APIs separated. For example, maybe you need a general UserEntity object to represent information about a user from your database, but then need separate UserV1Resource and UserV2Resource objects for the different versions, with adapters or some other design pattern to mediate between the types that get serialized to JSON or XML.
;;;;;;
;;;;;;
By having automated API tests, I am free to do any of that refactoring as I need, separating as I go, knowing that the moment I break any backward compatibility, my tests will scream at me.
;;;;;;
;;;;;;
A nice benefit of the API only being consumed by our mobile app for the time being is that we only need to worry about compatibility with the supported app versions. If we can make sure our end users are updating their app regularly, we'll be able to remove older versions, which helps minimize our technical debt.
;;;;;;
"
32274486"",""No"",""""";;;;;;
"32274486,""No"",""REST API versioning""";;;;;;
"38995643,""No"",""we have a RESTful service deployed on multiple nodes and we want to limit the number of calls coming to our service from each client with different quota for each client per minute.";;;;;;
our stack : Jboss application server, Java/Spring RESTful service.
;;;;;;
;;;;;;
What could be a possible technique to implement this?
;;;;;;
"
38995643"",""No"",""If the only way to access your API is through a UI client which you manages , then you can add a check on the client code (javascript in case of web app) to make a call only when the limit is not crossed by that user. Else there is no way, since a user can always access your API and the only thing at the server level which you can do is to check whether to send an error or valid result as a part of API response.
";;;;;;
"
38995643"",""No"",""Sometimes ago I read a good article where the same theme was highlighted. ";;;;;;
The idea is to move this logic into load balancing proxy and here some good reasons to do it:
;;;;;;
;;;;;;
;;;;;;
Eliminates technical debt - If you've got rate-limiting logic coupled in with app logic, you've got technical debt you don't need; that logic can be lifted and shifted to the proxy.
;;;;;;
Efficiency gains - You're offloading logic upstream, which means all your compute resources are dedicated to the application itself, and capacity becomes easier to predict.
;;;;;;
Security - It’s well understood that application layer (request-response) attacks are on the rise, including denial of service. By leveraging an upstream proxy with greater capacity for connections you can stop those attacks in their tracks, because they never get anywhere near the actual server.
;;;;;;
;;;;;;
"
38995643"",""No"",""To limit the stack, it means you need to keep state, at least based on some specific client identification. This may require you to maintain a central counter e.g. db (cassandra) which can allow you to look up the current request count per minute, and then within a java filter, you can restrict request counts as necessary.
";;;;;;
;;;;;;
Or, if you can track the client's session, you can use sticky sessions to force each client onto a specific node for the duration of its session; then you can simply track the number of requests per client within a Java filter and send a 503 status code (or something more relevant, such as 429 Too Many Requests) when the limit is exceeded.
;;;;;;
"
38995643"",""No"",""""";;;;;;
"38995643,""No"",""Limit number of calls to RESTful service""";;;;;;
"41955435,""No"",""I have a Rest Api Project which is using a database of around 25 to 30 tables. This project was built using JDBC Prepared statements.";;;;;;
The project is huge. Since I got to know hibernate orm is better for maintenance I thought I should migrate to Hibernate ORM. I have a intermediate experience in Hibernate. After I started working I had to create POJO classes which are different from my previous pojo classes, because hibernate annotation uses bean classes with mapping for other tables as well. Its getting messed up everywhere changing everything.Is it worth migrating to Hinernate ORM after my project is 90% done?
;;;;;;
;;;;;;
;;;;;;
- My business objects are changing.
;;;;;;
- The DAOs are different from the previous ones.
;;;;;;
- The controllers also need modifications.
;;;;;;
;;;;;;
"
41955435"",""No"",""Given that you are 90% complete, it certainly isn't something I would rush to do immediately. I would continue to maintain your current implementation strategy at least for now.
";;;;;;
;;;;;;
But this is precisely one of the pitfalls developers fall into when they elect to reuse the same model classes at various layers in their application. When you want to introduce a new piece of technology or make a radical change at a lower level, all layers that sit atop of that are affected. This leads to serious technical debt downstream that should be avoided.
;;;;;;
;;;;;;
Simple, prototype based applications can certainly get away with this type of code reuse, but more sophisticated, complex applications should not for the reasons I stated above.
;;;;;;
;;;;;;
What you could look to do is refactor the code so that you have cleaner, clearer boundaries between the various layers of the application. The ideal scenario is something like:
;;;;;;
;;;;;;
;;;;;;
- Persistence models (these are your @Entity classes)
;;;;;;
- Domain models (these are what your services take as input and return as output)
;;;;;;
- View models (these are what your controllers take as input and return as output)
;;;;;;
;;;;;;
;;;;;;
Each layer would then contain some amount of mapping code which knows how to take one model type to the next, something like this:
;;;;;;
;;;;;;
;;;;;;
- Controller takes a view-model and maps it to a domain model
;;;;;;
- Controller calls service with your domain model
;;;;;;
- Service calls a repository with the domain model
;;;;;;
- Repository takes a domain model and maps it to a persistence model
;;;;;;
- Repository calls Hibernate with the persistence model
;;;;;;
;;;;;;
;;;;;;
Many may view this as unnecessary abstraction and as I pointed out, in simple and basic use cases, that's true. But the benefit here is that you avoid unnecessary cohesion between layers when you start to separate them like this.
;;;;;;
;;;;;;
At a minimum, it's worth splitting the view and persistence models. This allows you to model the structure of your datastore where it makes the most sense while leaving open the option of exposing a completely different REST interface. That way, as requirements change on either end of the spectrum, each side is free to evolve, and only the mapping code that sits between them needs to be adjusted.
;;;;;;
"
41955435"",""No"",""""";;;;;;
"41955435,""No"",""Worth moving to Hibernate ORM from JDBC Prepared statements after the project is 90% done?""";;;;;;
"44232487,""No"",""I've been working on a personal project which, beyond just making something useful for myself, I've tried to use as a way to continue finding and learning architectural lessons. One such lesson has appeared like a Kodiak bear in the middle of a bike path and I've been struggling quite mightily with it.
";;;;;;
;;;;;;
The problem is essentially an amalgam of issues at the intersection of dependency injection, assembly decoupling and implementation hiding (that is, implementing my public interfaces using internal classes).
;;;;;;
;;;;;;
At my jobs, I've typically found that various layers of an application hold their own interfaces which they publicly expose, but internally implement. Each assembly's DI code registers the internal class to the public interface. This technique prevents outside assemblies from newing-up an instance of the implementation class. However, some books I've been reading while building this solution have spoken against this. The main things that conflict with my previous thinking have to do with the DI composition root and where one should keep the interfaces for a given implementation. If I move dependency registration to a single, global composition root (as Mark Seemann suggests), then I can get away from each assembly having to run its own dependency registrations. However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them). As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it. As an example, here is a diagram he provided, and, for contrast, a diagram for how I would normally implement the same solution (okay, these aren't quite the same; kindly focus on the arrows and notice when implementation arrows cross assembly boundaries instead of composition arrows).
;;;;;
;;;;;;
Martin Style
;;;;;;
;;;;;;
"![]()
";;;;;;
;;;;;;
What I've normally seen
;;;;;;
;;;;;;
"![]()
";;;;;;
;;;;;;
"I immediately saw the advantage in Martin's diagram, that it allows the lower assemblies to be swapped out for another, given that it has a class that implements the interface in the layer above it. However, I also saw this seemingly major disadvantage: If you want to swap out the assembly from an upper layer, you essentially """"steal"""" the interface away that the lower layer is implementing.
";;;;;;
;;;;;;
After thinking about it for a little bit, I decided the best way to be fully decoupled in both directions would be to have the interfaces that specify the contract between layers in their own assemblies. Consider this updated diagram:
;;;;;;
;;;;;;
"![]()
";;;;;;
;;;;;;
Is this nutty? Is it right on? To me, it seems like this solves the problem of interface segregation. It doesn't, however, solve the problem of not being able to hide the implementation class as internal. Is there anything reasonable that can be done there? Should I not be worried about this?
;;;;;;
;;;;;;
One solution that I'm toying around with in my head is to have each layer implement the proxy layer's interface twice; once with a public class and once with an internal class. This way, the public class could merely wrap/decorate the internal class, like this:
;;;;;
;;;;;;
"![]()
";;;;;;
;;;;;;
Some code might look like this:
;;;;;;
;;;;;;
namespace MechanismProxy // Simulates Mechanism Proxy Assembly
{
    public interface IMechanism
    {
        void DoStuff();
    }
}

namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // This class would be registered to IMechanism in the DI container
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism = new InternalMechanism();

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        public void DoStuff()
        {
            // Do whatever
        }
    }
}
;;;;;;
;;;;;;
... of course, I'd still have to address some issues regarding constructor injection and passing the dependencies injected into the public class to the internal one. There's also the problem that outside assemblies could possibly new-up the public Mechanism... I would need a way to ensure only the DI container can do that... I suppose if I could figure that out, I wouldn't even need the internal version. Anyway, if anyone can help me understand how to overcome these architectural problems, it would be mightily appreciated.
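;;;;;;
For what it's worth, forwarding constructor-injected dependencies through the public wrapper is mechanical; one way it could look, rewriting the Mechanism classes above (ILogger is an invented stand-in dependency):
;;;;;;
namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // Invented dependency, purely for illustration.
    public interface ILogger
    {
        void Log(string message);
    }

    // The public wrapper receives the dependencies and hands them to the internal implementation.
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism;

        public Mechanism(ILogger logger)
        {
            _internalMechanism = new InternalMechanism(logger);
        }

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        private readonly ILogger _logger;

        public InternalMechanism(ILogger logger)
        {
            _logger = logger;
        }

        public void DoStuff()
        {
            _logger.Log("Doing stuff");
        }
    }
}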
;;;;;;
"
44232487"",""No"",""This is a somewhat opinion based topic, but since you asked, I'll give mine.
";;;;;;
;;;;;;
Your focus on creating as many assemblies as possible in order to be as flexible as possible is very theoretical; you have to weigh the practical value against the costs.
Don't forget that assemblies are only a container for compiled code. They become mostly relevant only when you look at the processes for developing, building and deploying/delivering them. So you have to ask a lot more questions before you can make a good decision on how exactly to split up the code into assemblies.
;;;;;;
;;;;;;
;;;;;;
So here are a few examples of questions I'd ask beforehand:
;;;;;;
;;;;;;
Does it make sense from your application domain to split up the assemblies in this way (e.g. will you really need to swap out assemblies)?
;;;;;;
;;;;;;
Will you have separate teams in place for developing those?
;;;;;;
;;;;;;
What will the scope be in terms of size (both LOC and team sizes)?
;;;;;;
;;;;;;
Is it required to protect the implementation from being available/visible? E.g. are those external interfaces or internal?
;;;;;;
;;;;;;
Do you really need to rely on assemblies as a mechanism to enforce your architectural separation? Or are there other, better measures (e.g. code reviews, code checkers, etc.)?
;;;;;;
;;;;;;
Will your calls really only happen between assemblies or will you need remote calls at some point?
;;;;;;
;;;;;;
Do you have to use private assemblies?
;;;;;;
;;;;;;
Would sealed classes help enforce your architecture instead?
;;;;;;
;;;;;;
;;;;;;
For a very general view, leaving these additional factors out, I would side with Martin Fowler's diagram, because that is just the standard way to provide and use interfaces. If your answers to the questions indicate additional value in further splitting up or protecting the code, that may be fine, too. But you'd have to tell us more about your application domain, and you'd have to be able to justify it well.
;;;;;;
;;;;;;
So in a way you are confronted with two old wisdoms:
;;;;;;
;;;;;;
;;;;;;
- Architecture tends to follow organizational setups.
;;;;;;
- It is very easy to over-engineer (over-complicate) architectures but it is very hard to design them as simple as possible. Simple is most of the time better.
;;;;;;
;;;;;;
;;;;;;
When coming up with an architecture, you want to consider those factors upfront; otherwise they'll come back to haunt you later in the form of technical debt.
;;;;;;
"
44232487"",""No"",""";;;;;;
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
;;;;;;
;;;;;;
;;;;;;
That doesn't sound like a downside. Implementation classes that are bound to abstractions in your Composition Root can legitimately be used explicitly somewhere else for other reasons. I don't see any benefit in hiding them.
;;;;;;
;;;;;;
;;;;;;
I would need a way to ensure only the DI container can do that...
;;;;;;
;;;;;;
;;;;;;
No, you don't.
;;;;;;
;;;;;;
Your confusion probably stems from thinking of DI and the Composition Root as if there must always be a container behind them.
;;;;;;
;;;;;;
"In fact, however, the infrastructure could be completely """"container-agnostic"""" in a sense that you still have your dependencies injected but you don't think of """"how"""". A Composition Root that uses a container is your choice, as good choice as possible another Composition Root where you manually compose dependencies. In other words, the Composition Root could be the only place in your code that is aware of a DI container, if any is used. Your code is built agaist the idea of Dependency Inversion, not the idea of Dependency Inversion container.
";;;;;;
;;;;;;
A short tutorial of mine can possibly shed some light here
;;;;;;
;;;;;;
"http://www.wiktorzychla.com/2016/01/di-factories-and-composition-root.html
";;;;;;
"
44232487"",""No"",""";;;;;;
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
;;;;;;
;;;;;;
;;;;;;
Unless you are building a reusable library (that gets published on NuGet and gets used by other code bases you have no control over), there is typically no reason to make classes internal. Especially since you program to interfaces, the only place in the application that depends on those classes is the Composition Root.
;;;;;;
;;;;;;
Also note that if you move the abstractions to a different library and let both the consuming and the implementing assemblies depend on that library, those assemblies don't have to depend on each other. This means it doesn't matter at all whether those classes are public or internal.
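;;;;;;
Sketched in the same namespace-per-assembly style as the question (all names invented), the dependency directions then look like this:
;;;;;;
namespace Contracts // Simulates the abstractions assembly; referenced by both sides
{
    public interface IMechanism
    {
        void DoStuff();
    }
}

namespace ConsumerAssembly // Simulates the consuming assembly; references only Contracts
{
    using Contracts;

    public class Workflow
    {
        private readonly IMechanism _mechanism;

        public Workflow(IMechanism mechanism)
        {
            _mechanism = mechanism;
        }

        public void Run()
        {
            _mechanism.DoStuff();
        }
    }
}

namespace ImplementationAssembly // Simulates the implementing assembly; also references only Contracts
{
    using Contracts;

    // Whether this class is public or internal does not matter to ConsumerAssembly,
    // because ConsumerAssembly never references this assembly directly.
    public class DefaultMechanism : IMechanism
    {
        public void DoStuff() { /* ... */ }
    }
}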
;;;;;;
;;;;;;
This level of separation (placing the interfaces in an assembly of their own), however, is hardly ever needed. In the end it's all about the required granularity during deployment and the size of the application.
;;;;;;
;;;;;;
;;;;;;
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it.
;;;;;;
;;;;;;
;;;;;;
"This is the Dependency Inversion Principle, which states:
";;;;;;
;;;;;;
;;;;;;
In a direct application of dependency inversion, the abstracts are owned by the upper/policy layers
;;;;;;
;;;;;;
"
44232487"",""No"",""<.net>""";;;;;;
"44232487,""No"",""Architecture: Dependency Injection, Loosely Coupled Assemblies, Implementation Hiding""";;;;;;