Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense "1247","2","","1106","2010-09-07 20:31:35","","2","","

I think you can call yourself good at something once you have developed enough knowledge to look at yourself as if from another person's perspective, and then judge whether you are good. In other words, you need strong ""meta"" skills.

For example, I'm a hobbyist writer, and when looking at my own writing I can tell:

etc. etc. Because of this I feel that I'm qualified to decide if I'm good at writing or not. The same applies to X.

","92","","","","","2010-09-07 20:31:35","","","","0","","","2010-09-07 20:31:35","CC BY-SA 2.5" "210701","2","","210668","2013-09-06 16:19:54","","78","","

I'm surprised that everybody thinks this is such a good thing. The authors of Peopleware (which, IMO, is still one of the precious few software project management books actually worth reading) strongly disagree. Almost the entire Part IV of the book is dedicated to this very issue.

The software team is an incredibly important functional unit. Teams need to jell to become really productive. It takes time (a lot of time) for team members to earn each other's respect, to learn each other's habits and quirks and strengths and weaknesses.

Certainly, from personal experience, I can say that after a year of working with certain people, I've learned to laugh off certain things that used to rile me up, my estimates as team lead are much better, and it's not too difficult to get the work distributed so as to make everyone happy. It wasn't like that in the beginning.

Now you might say, ""Oh, but we're not breaking up the whole team, just moving a few people."" But consider (a) how blindly unproductive their replacements are going to be in the beginning, and (b) how many times you'll find yourself or other teams saying, without even thinking, ""I really liked X"" or ""This would have been easier with Y still around"", subtly and unconsciously offending the new members and creating schisms within the existing team, even sowing discontent among the ""old"" members.

People don't do this on purpose, of course, but it happens almost every time. People do it without thinking. And if they force themselves not to, they end up focusing on the issue even more, and are frustrated by the forced silence. Teams and even sub-teams will develop synergies that get lost when you screw around with the structure. The Peopleware authors call it a form of ""teamicide"".

That being said, even though rotating team members is a horrible practice, rotating teams themselves is perfectly fine. Although well-run software companies should have some concept of product ownership, it's not nearly as disruptive to a team to move that entire team to a different project, as long as the team actually gets to finish the old project or at least bring it to a level they're happy with.

By having team stints instead of developer stints, you get all the same benefits you would expect to get with rotating developers (documentation, ""cross-pollination"", etc.) without any of the nasty side-effects on each team as a unit. To those who don't really understand management, it may seem less productive, but rest assured that the productivity lost by splitting up the team totally dwarfs the productivity lost by moving that team to a different project.

P.S. In your footnote you mention that the tech lead might be the only person not to be rotated. This is pretty much guaranteed to mess up both teams. The tech lead is a leader, not a manager; he or she has to earn the respect of the team, and is not simply granted authority by higher levels of management. Putting an entire team under the direction of a new lead whom they've never worked with and who is very likely to have different ideas about things like architecture, usability, code organization, estimation... well, it's going to be stressful as hell for the lead trying to build credibility and very unproductive for the team members who start to lose cohesion in the absence of their old lead. Sometimes companies have to do this, e.g. if the lead quits or gets promoted, but doing it by choice sounds insane.

","3249","","39006","","2013-12-13 22:16:55","2013-12-13 22:16:55","","","","11","","","","CC BY-SA 3.0" "106828","2","","106795","2011-09-09 01:46:44","","5","","

My experience working with extremely complex business logic and a domain expert is that the domain expert's time tends to be extremely valuable, even more so in a small company.

He is of course rattling off endless details and nuances to you, partly because it comes naturally to him and partly because he is likely an extremely busy person. These types of people don't like having to repeat themselves.

I know this sounds strange, but get a decent digital voice recorder, like the kind a journalist might carry around. Sit down with him and just let him brain-dump. While he is talking, take notes, but only write down the main points.

When you are done, tackle a single point at a time and replay the relevant audio sections to recover all of the details that you missed. If you are not an aural person, you can transcribe the conversation into text yourself, or, if your office has a secretary or administrative assistant you are allowed to utilize, ask them to copy the conversation into a textual document for you.

In my opinion this is the best way, and you will find that the domain expert will be much clearer and more descriptive knowing he/she is being recorded; just make sure he/she is comfortable with this before you do it. Furthermore, weekends play havoc with your short-term memory: you will come in on Monday having forgotten critical pieces of information, forcing you to bug the domain expert with nagging questions that he has already been over.

Only then, when you have the raw information, can you formulate use cases and user stories.

","25476","","","","","2011-09-09 01:46:44","","","","2","","","","CC BY-SA 3.0" "106966","1","","","2011-09-09 13:39:12","","22","14937","

In the more traditional projects that I've worked on, the project manager (and, on larger projects, there might be associate/deputy/assistant project managers should one person be unavailable) is the person responsible for communicating with the customer, receiving project health and status updates, determining scheduling and budgeting, managing the process, ensuring the team has what they need to complete tasks, and so on.

In Scrum, however, these responsibilities are split between the Product Owner and the ScrumMaster. The Product Owner is the voice of the customer. They interact directly with the customer, create user stories, organize and prioritize the product backlog, and other user/customer facing issues. The ScrumMaster handles the process, overseeing meetings (including estimation and planning), removing impediments, and monitoring the overall health of the project, making adjustments as needed.

I've read in multiple sources, including Wikipedia, that the roles of ScrumMaster and Product Owner should be held by two different people. I've not only read about, but also worked on, successful ""traditional"" style projects where the activities of both were handled by a single individual. In fact, it makes more sense for one to three people to be responsible for handling project-level (including human resources/staffing) and process-level tasks, as they often go hand-in-hand. Process changes have an impact on scheduling, budgeting, quality, and other project-level goals, and project changes have an impact on process.

Why does Scrum call for isolating these activities into two roles? What advantages does this actually provide? Has anyone been on a successful Scrum project where the Product Owner and ScrumMaster were the same individual?

","4","","","","","2011-09-09 17:30:26","In Scrum, why shouldn't the Product Owner and ScrumMaster roles be combined?","","5","8","4","","","CC BY-SA 3.0" "106967","2","","106966","2011-09-09 13:44:46","","4","","

I am no expert, but I think the Scrum Master should be the team advocate/facilitator. The voice of the customer should have the customer's interests at heart. The Scrum Master should be all about helping the team get what they need to have a successful sprint.

","23632","","","","","2011-09-09 13:44:46","","","","0","","","","CC BY-SA 3.0" "107087","2","","106980","2008-09-21 14:17:40","","7","","

Apart from the hard stuff like offices, tools, gear, food and snacks, I'd like to add something that makes me feel special:

Let your developers in on decisions!
If you're getting new tools for them, or moving, or starting a new project, or even hiring new people - let your developers in on those decisions. It's only fair they get a say in who their new coworker is or what the next big thing they are going to work on for a few years will be.

One way to do this is to conduct meetings in a round table fashion where you specifically ask every attending person for their opinion, not just let them speak up if they wish.

","","Niklas Winde","","","","2008-09-21 14:17:40","","","","1","","","2011-09-09 13:40:37","CC BY-SA 2.5" "211132","1","","","2013-09-11 13:08:16","","2","2443","

I am about to start user story sessions with my team. It's quite new for them, and I am also wrestling with certain things myself. For the current project we have some well worked-out wireframes.

I have read a lot about the way of writing user stories: what the template should look like, and about different aids like INVEST.

The plan is to turn the wireframes into user stories. Let's say I have a screen where a user can edit an order. There is a lot of detail on that screen. Now, when creating a user story for this screen, will it suffice to say:

As an Admin I can edit a purchase order so that mistakes typed by the user can be corrected.

Or should I specify each detail, like:

As an Admin I can resend an invoice to the customer, so he can get a copy of his lost one.

As an Admin I can review the customer order so he has detailed information about each purchase

As an Admin I can remove items from an order so that, in case the customer made a mistake, those items can be taken off.


And what about the acceptance criteria? How should they be defined for a user story like this? Where do I define which fields need to be shown on an order detail page? Can this be part of the acceptance criteria?

","6632","","-1","","2020-06-16 10:01:49","2013-09-14 21:58:52","Level of detail of a user story","","2","0","1","","","CC BY-SA 3.0" "107523","1","","","2011-09-12 05:04:47","","5","359","

I work for a large government department as part of an IT team that manages and develops websites as well as stand alone web applications.

We’re running into problems somewhere in the SDLC that don’t rear their ugly heads until time and budget are starting to run out.

We try to be “Agile” (software specifications are deliberately not exhaustive, and clients have direct access to the developers any time they want), and we are also in a reasonably peculiar position in that we are not allowed to make a profit from the services we provide. We only service the divisions within our government department, and can only charge for the time and effort we actually put into a project. So if we deliver a project that we have over-quoted on, we will only invoice for the actual time spent.

Our software specifications are not as thorough as they could be, but they always include at a minimum:

  • Wireframe mockups for every form view
  • A data dictionary of all field inputs
  • Descriptions of any business rules that affect the system
  • Descriptions of the outputs

I’m new to software management, but I’ve overseen enough software projects now to know that as soon as users start observing demos of the system, they start making a huge amount of requests like “Can we add a few more fields to this report.. can we redesign the look of this interface.. can we send an email at this part of the workflow.. can we take this button off this view.. can we make this function redirect to a different screen.. can we change some text on this screen… can we create a special account where someone can log in and get access to X… this report takes too long to run can it be optimised.. can we remove this step in the workflow… there’s got to be a better image we can put here…” etc etc etc.

Some changes are tiny and can be implemented reasonably quickly, but there could be 50-100 or so such requests during the course of the SDLC. Other change requests are things clients claim they “just assumed would be part of the system”, even if not explicitly spelled out in the spec.

We are having a lot of difficulty managing this process. With no experienced software project managers in our team, we need to come up with a better way both to internally identify whether work being requested is “out of spec”, and to communicate this to a client in such a manner that they can understand why what they are asking for is “extra” work.

We need a way to track this work and be transparent with it.

In the spirit of Agile development, where we are not spec'ing software systems into the ground and back again before development begins, and bearing in mind that clients have access to any developer any time they want, I am looking for some tips and pointers from experienced software project managers on how to handle this sort of ""scope creep"" problem: tracking it, being transparent about it, and communicating it to clients in a way they can understand.

Happy to clarify anything as needed.

I really appreciate anyone who takes the time to offer some advice.

Thanks.

","36474","","36474","","2011-09-12 05:57:54","2011-11-21 21:43:34","Managing software projects - advice needed","","5","0","5","","","CC BY-SA 3.0" "211465","2","","208271","2013-09-14 01:49:52","","65","","

There is no rule, either in the W3C spec or the unofficial rules of REST, that says that a PUT must use the same schema/model as its corresponding GET.

It's nice if they're similar, but it's not unusual for PUT to do things slightly differently. For example, I've seen a lot of APIs that include some kind of ID in the content returned by a GET, for convenience. But with a PUT, that ID is determined exclusively by the URI and has no meaning in the content. Any ID found in the body will be silently ignored.

REST and the web in general is heavily tied to the Robustness Principle: ""Be conservative in what you do [send], be liberal in what you accept."" If you agree philosophically with this, then the solution is obvious: Ignore any invalid data in PUT requests. That applies to both immutable data, as in your example, and actual nonsense, e.g. unknown fields.

PATCH is potentially another option, but you shouldn't implement PATCH unless you're actually going to support partial updates. PATCH means only update the specific attributes I include in the content; it does not mean replace the entire entity but exclude some specific fields. What you're actually talking about is not really a partial update, it's a full update, idempotent and all, it's just that part of the resource is read-only.

A nice thing to do if you choose this option would be to send back a 200 (OK) with the actual updated entity in the response, so that clients can clearly see that the read-only fields were not updated.
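
To make that lenient option concrete, here is a minimal sketch of such a PUT handler. Flask and the in-memory orders store are my own illustrative assumptions, not something prescribed above; the point is simply that the read-only and unrecognised fields are dropped and the stored entity is echoed back with a 200.

    # Minimal sketch of a liberal-in-what-you-accept PUT handler (assumed Flask app).
    from flask import Flask, request, jsonify, abort

    app = Flask(__name__)

    orders = {42: {'id': 42, 'status': 'open', 'total': 99.5}}  # hypothetical store
    WRITABLE_FIELDS = {'status', 'total'}   # 'id' is read-only; it comes from the URI

    @app.route('/orders/<int:order_id>', methods=['PUT'])
    def put_order(order_id):
        if order_id not in orders:
            abort(404)
        body = request.get_json(force=True) or {}
        # Silently drop the read-only 'id' and any unrecognised fields.
        updates = {k: v for k, v in body.items() if k in WRITABLE_FIELDS}
        orders[order_id] = {'id': order_id, **updates}
        # Echo the stored entity so clients can see which fields were (not) applied.
        return jsonify(orders[order_id]), 200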

There are certainly some people who think the other way - that it should be an error to attempt to update a read-only portion of a resource. There is some justification for this, primarily on the basis that you would definitely return an error if the entire resource was read-only and the user tried to update it. It definitely goes against the robustness principle, but you might consider it to be more ""self-documenting"" for users of your API.

There are two conventions for this, both of which correspond to your original ideas, but I'll expand on them. The first is to prohibit the read-only fields from appearing in the content, and return an HTTP 400 (Bad Request) if they do. APIs of this sort should also return an HTTP 400 if there are any other unrecognized/unusable fields. The second is to require the read-only fields to be identical to the current content, and return a 409 (Conflict) if the values do not match.

I really dislike the equality check with 409 because it invariably requires the client to do a GET in order to retrieve the current data before being able to do a PUT. That's just not nice and is probably going to lead to poor performance, for somebody, somewhere. I also really don't like 403 (Forbidden) for this as it implies that the entire resource is protected, not just a part of it. So my opinion is, if you absolutely must validate instead of following the robustness principle, validate all of your requests and return a 400 for any that have extra or non-writable fields.

Make sure your 400/409/whatever includes information about what the specific problem is and how to fix it.

Both of these approaches are valid, but I prefer the former one in keeping with the robustness principle. If you've ever experienced working with a large REST API, you'll appreciate the value of backward compatibility. If you ever decide to remove an existing field or make it read-only, it is a backward compatible change if the server just ignores those fields, and old clients will still work. However, if you do strict validation on the content, it is not backward compatible anymore, and old clients will cease to work. The former generally means less work for both the maintainer of an API and its clients.

","3249","","-1","","2017-05-23 11:33:36","2013-09-14 01:49:52","","","","4","","","","CC BY-SA 3.0" "4918","2","","4879","2010-09-17 19:47:04","","1","","

Getting as much of the crap out of the developers' way is the way to go. Anything that makes me think of anything other than the task at hand is a waste of my clock cycles.

Unfortunately, what this actually is will differ from person to person. Some people touch the mouse as little as possible. Other people hate to have to remember keyboard shortcuts. Some people want silence, others like the hum of a working office.

Anything mechanical or repetitive should be automated. Code formatting tools, version-control commit hooks for lint checking, continuous integration servers like CruiseControl, etc. are good and fairly generally applicable.
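
As a concrete (hypothetical) example of the commit-hook idea: a pre-commit hook can run the linter over the staged files and block the commit when it fails. The sketch below assumes flake8 as the linter and would live in .git/hooks/pre-commit; substitute whatever your team actually uses.

    #!/usr/bin/env python3
    # Sketch of a git pre-commit hook that lint-checks staged Python files.
    import subprocess
    import sys

    # Files staged for this commit (added/copied/modified).
    staged = subprocess.run(
        ['git', 'diff', '--cached', '--name-only', '--diff-filter=ACM'],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    py_files = [f for f in staged if f.endswith('.py')]
    if py_files:
        result = subprocess.run(['flake8', *py_files])
        if result.returncode != 0:
            print('Lint errors found; commit aborted.')
            sys.exit(1)   # a non-zero exit code makes git abort the commit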

Aside from that, give developers the choice to make their own decisions on what works for them. Some decisions have to be made at a department / company level (code style, build system, possibly even IDE, depending on your level of integration), but everything else should be left to the person who has to get that small amount of code into the file.

","2238","","","","","2010-09-17 19:47:04","","","","1","","","","CC BY-SA 2.5" "5026","2","","2410","2010-09-18 00:46:47","","6","","

I love Chris McMahon's analogy of software development being like the creation of music, particularly jazz.

This is Ella Fitzgerald and Count Basie doing the song One O'Clock Jump. The song is a twelve-bar blues, which is the jazz equivalent of a database app with a UI. By which I mean: just as every programmer has built a database app with a UI, every American musician has played twelve-bar blues. It is a framework on which many many many songs are hung, from Count Basie to Jimi Hendrix to the Ramones.

This particular video is a great example of agile practice. Listen to how the voice and piano influence each other. This is a lot like pair programming, and it's a lot like TDD: voice does something; piano responds; piano does something; voice responds. And notice the eye contact. These people are intensely aware of what's going on instant-to-instant. They have no sheet music (BDUF). They are involved in an activity that takes intense concentration and skill, just like good software development. They are also clearly aware that there is an audience, just as good software development should be aware of the needs of the people paying the bills.

Here's the link to the blog post in which he discusses it: http://chrismcmahonsblog.blogspot.com/2007/05/example-of-analogy-monks-vs-music.html

","38","","","","","2010-09-18 00:46:47","","","","1","","","2011-03-03 17:06:14","CC BY-SA 2.5" "108004","2","","107917","2011-09-13 19:37:36","","3","","

Take as long as you need to select an amount of work your team thinks it can reasonably achieve in the sprint. But you should be spending time during the (previous) sprint refining the backlog: estimating and refining stories.

From the Scrum Primer (PDF):

Product Backlog Refinement

One of the lesser known, but valuable, guidelines in Scrum is that five or ten percent of each Sprint must be dedicated by the Team to refining (or “grooming”) the Product Backlog. This includes detailed requirements analysis, splitting large items into smaller ones, estimation of new items, and re-estimation of existing items. Scrum is silent on how this work is done, but a frequently used technique is a focused workshop near the end of the Sprint, so that the Team and Product Owner can dedicate themselves to this work without interruption. For a two-week Sprint, five percent of the duration implies that each Sprint there is a half-day Product Backlog Refinement workshop. This refinement activity is not for items selected for the current Sprint; it is for items for the future, most likely in the next one or two Sprints. With this practice, Sprint Planning becomes relatively simple because the Product Owner and Scrum Team start the planning with a clear, well-analyzed and carefully estimated set of items. A sign that this refinement workshop is not being done (or not being done well) is that Sprint Planning involves significant questions, discovery, or confusion and feels incomplete; planning work then often spills over into the Sprint itself, which is typically not desirable.

Doing this means you can focus on planning during planning, and it doesn't drag on all day to the point where the team starts to lose focus and get bored.

","25708","","25708","","2016-06-21 07:29:28","2016-06-21 07:29:28","","","","1","","","","CC BY-SA 3.0" "108268","2","","108257","2011-09-14 15:11:31","","6","","

A user story is typically created from a need expressed by the client or potential user of the system. It's often of the format ""As a {role}, I want {goal} so that {benefits}"". The collective set of user stories captures the functionality desired in the system that is being built. The customer or customer representative prioritizes each user story, typically based on the value added by having the functionality specified in the story.

Once written, user stories are sized and estimated. There are a number of techniques to do this. The most common method of estimation that I've seen is the amount of effort needed to complete the task, expressed in arbitrary values. There's a base unit that everyone can agree on, which is used as a common framework for providing estimates of the effort required. I've seen these as unitless values called ""story points"", but I don't see why you couldn't also estimate the user story in hours. The key is to be consistent across all user stories.

For the first iteration, the team estimates how many story points they can complete in a given iteration and pulls up to that number of points from the backlog into the iteration. If you are estimating in hours, then you can determine how many hours your development team will dedicate to the project during the iteration and pull down that many hours' worth of work. After the iteration, you determine how many points or hours you actually completed and pull down that amount of work for the next iteration.
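
As a rough illustration of that bookkeeping (my own toy sketch, with made-up stories and numbers), the points completed last iteration simply cap how much work is pulled into the next one:

    # Toy sketch: pull stories into the next iteration based on last velocity.
    backlog = [                      # prioritised by the customer, highest value first
        ('Export invoices to PDF', 5),
        ('Email notifications', 3),
        ('Audit log screen', 8),
        ('Bulk import', 13),
    ]

    velocity = 12                    # story points actually completed last iteration

    planned, total = [], 0
    for story, points in backlog:
        if total + points > velocity:
            break                    # stop once the next story would exceed velocity
        planned.append(story)
        total += points

    print(planned, total)            # ['Export invoices to PDF', 'Email notifications'] 8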

During the entire process, your overall backlog of stories is changing. Features might be removed, new features can be added, or the priority can be changed. However, none of this affects the work that was pulled down for the current iteration. Only between iterations should you adjust what you are working on. You will typically either have an on-site customer representative or someone who can act as voice of the customer and is in contact with the appropriate people from the customer's organization. They are continually refining the requirements and acceptance criteria throughout the project.

How you further break down user stories into tasks is up to you. It might be an undocumented preference of the engineer, or there might be a detailed analysis of exactly what each user story entails. That's something that needs to be specified by tailoring the process to meet the needs of your organization, team, and project.

You should have a definition of done, which can be used to determine when a particular user story is shippable. This covers design, implementation, testing, quality assurance, acceptance criteria, and documentation. You can specify which tools and methods you use to ensure that a given feature, as specified by a user story, is done. Once a user story is done and integrated, the product should be in a potentially-shippable state, meaning that packaging it and delivering it to the customer would add some value to their operations or meet some of their needs.

Ultimately, you need to tailor the processes to work for your organization, team, and project. Doing anything ""by the book"" is usually a recipe for problems. Just because something has been documented and works well for certain teams working on certain projects doesn't mean that it fits everything that you need it to do.

You might be interested in this InfoQ article on user story estimation as well as Scott Ambler's Introduction to User Stories.

","4","","4","","2019-08-05 00:54:45","2019-08-05 00:54:45","","","","0","","","","CC BY-SA 4.0" "212223","2","","86006","2013-09-22 19:50:39","","3","","

Is it because these were all written in managed, garbage-collected languages rather than native code?

No. Slow code will perform poorly regardless. Sure, a particular language may introduce certain classes of problems while solving others. But good programmers are quite capable of finding workarounds given enough time.

Is it the individual programmers who wrote the software for these devices?

Partly. In many cases it is quite likely at least a contributing factor. This is an unfortunate side-effect of an industry where good programmers are in high demand and short supply. Also the gulfs between various levels of technical ability can be quite large. So it stands to reason that sometimes the programmers tasked to implement certain software could be congratulated just for getting it to work (sort of).

In all of these cases the app developers knew exactly what hardware platform they were targeting and what its capabilities were; did they not take it into account?

Partly. For a start, the exact hardware platform is probably not known, as that is often negotiated with various manufacturers in parallel during software development. In fact, there can even be small (but not necessarily insignificant) changes to underlying hardware after initial release. However, I would agree that the general capabilities will be known.

Part of the problem is that software probably isn't developed on the hardware; it's done in emulators. This makes it difficult to account for true device performance, even if the emulators are 100% accurate - which they aren't.

Of course this doesn't really justify insufficient testing on the appropriate prototype hardware before release. That blame probably lies outside of dev/qa control.

Is it the guy who goes around repeating "optimization is the root of all evil," did he lead them astray?

No. I'm pretty certain they don't listen to him anyway; otherwise he wouldn't be misquoted so often (that's supposed to be "premature optimisation ..."). :-D

It's more likely that too many programmers take one of 2 extremes with regard to optimisation.

  • Either they ignore it completely.
  • Or they obsess over minutiae that have nothing to do with the actual performance requirements. The net effect being that the budget runs out and the code is too obfuscated to optimise the real performance problems safely.

Was it a mentality of "oh it's just an additional 100ms" each time until all those milliseconds add up to minutes?

Possibly. Obviously if Sleep(100) has been used as the equivalent of tissue paper used to slow the bleeding of a severed limb - then problems are to be expected. However, I suspect the problem is more subtle than that.

The thing is, modern computing hardware (including embedded devices) is much faster than people give it credit for. Most people, even "experienced" programmers, fail to appreciate just how fast computers are. 100ms is a long time - a very long time. And as it so happens, this "very long time" cuts 2 ways (see the short timing sketch after the list below):

  • The first is that programmers worry unnecessarily about the things that a computer does extremely quickly. (It so happens that it was just such a concern "incrementing a value 300 times per second" that led me here in the first place.)
  • The second is that they sometimes fail to show due concern when things do take a very long time (on the computing timescale). So:
    • if they ignore the effects of latency when communicating over a network or with a storage device;
    • if they ignore the impact of a thread blocked and waiting for another thread;
    • if they forget that, because computers work so quickly, they are quite capable of repeating a task far more often than they should, without the developer being aware of a problem;
    • ... if any combination of such oversights occurs, a routine will unexpectedly run very slowly (on the computing timescale). A few repeats and it will even be noticeable to humans - but it may be tricky to pin down, because hundreds of interconnected things are all running quickly by themselves.
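
To put a rough number on how long 100ms is (a quick, unscientific sketch of my own; the result is machine- and language-dependent, and only the order of magnitude matters):

    # Count how many trivial increments fit into 100 ms of pure computation.
    import time

    count = 0
    deadline = time.perf_counter() + 0.1      # 100 ms from now
    while time.perf_counter() < deadline:
        for _ in range(1000):                 # batch the work between clock checks
            count += 1

    # Even interpreted Python typically manages millions of increments here;
    # compiled code on embedded hardware does far more. Compare that with a
    # concern like incrementing a value 300 times per second.
    print(count)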

Is it my fault, for having bought these products in the first place?

Yes definitely. Well, not you personally but consumers in general. Products are sold (and bought) by feature checklists. Too few consumers are demanding better performance.

To illustrate my point: The last time I wanted to buy a cell-phone, the store couldn't even offer a demo model to play with in-store. All they had were plastic shells with stickers to show what the screen would look like. You can't even get a feel for the weight like that - let alone performance or usability. My point is that if enough people objected to that business model, and voted with their wallets to voice their objection, we would be one small step in the right direction.

But they don't, so we aren't; and every year new cell-phones run slower on faster hardware.

(The questions not asked.)

  • Are marketing people to blame? Partly. They need release dates. And when said date looms, the choice between "get it working" and "make it faster" is a no-brainer.
  • Are sales people to blame? Partly. They want more features in the checklist. They hype up feature lists and ignore performance. They (sometimes) make unrealistic promises.
  • Are managers to blame? Partly. Inexperienced managers might make many mistakes, but even very experienced managers may (quite rightly) sacrifice time to resolve performance issues in favour of other concerns.
  • Are specifications to blame? Partly. If something is left out of specification, it's that much easier to "forget" about it later. And if it's not specifically stated, what's the target? (Although I do personally believe that if a team takes pride in its work, they would worry about performance regardless.)
  • Is education to blame? Maybe. Education will probably always be on the back-foot. I certainly disapprove of "education" that rapidly churns out beginners with a superficial understanding software development. However, education that is backed up with theory, and instills a culture of learning can't be bad.
  • Are upgrades to blame? Partly. New software, old hardware really is tempting fate. Even before version X is released, X + 1 is in planning. The new software is compatible, but is the old hardware fast enough? Was it tested? A particular performance fix may be rolled into the new software - encouraging an ill-advised software upgrade.

Basically, I believe there are many contributing factors. So, unfortunately there's no silver bullet to fix it. But that doesn't mean it's doom and gloom. There are ways to contribute to improving things.

So, at what point did things go wrong for these products?

IMHO we can't really identify any single point. There are many contributing factors that evolved over time.

  • Bean counters: cost cutting, market timing. But then again would we have made the advances we have achieved without the pressure?
  • High demand and low supply of skilled people in the industry. Not just programmers, but also managers, testers, and even sales-people. Lack of skills & experience leads to mistakes. But then again it also leads to learning.
  • Bleeding-edge technology. Until a technology matures, it will regularly bite in unexpected ways. But then again it often provided a number of advantages in the first place.
  • Compounded complication. Over time, the industry has evolved: adding more tools, technologies, layers, techniques, abstractions, hardware, languages, variation, options. This makes it somewhat impossible to have a "full" understanding of modern systems. However, we are also capable of doing a lot more in a far shorter time as a result.

What can we as programmers do to avoid inflicting this pain on our own customers?

I have a few suggestions (both technical and non-technical) which may help:

  • Insofar as it's possible - use your own product. There's nothing like first-hand experience to reveal things that are awkward, slow or inconvenient. However you will need to consciously avoid bypassing deficiencies due to "insider knowledge". E.g. If you have no problems synching contacts because you do it with a backdoor Python script - you're not using "the product". Which brings up the next point...
  • Listen to your users (preferably first hand, but at least second hand via support). I know programmers (generally) prefer to stay hidden away and avoid human interaction; but that doesn't help you discover the problems other people experience when using your product. E.g. You might not notice that the menu options are slow, because you know all the shortcuts and use those exclusively. Even if the manual fully documents all shortcuts, some people will still prefer the menus - despite being insufferably slow.
  • Strive to improve your technical skills and knowledge on a continuous basis. Develop the skill to critically analyse everything you learn. Reassess your knowledge regularly. In some cases, be prepared to forget what you thought you knew. Which brings up...
  • Some technologies / techniques can be very tricky leading to subtle misunderstandings and incorrect implementations. Others through the evolution of common knowledge or available tools may fall in or out of favour (e.g. Singletons). Some topics are so tricky that they breed a bunch of "hocus-pocus pundits" that propagate a huge body of misinformation. A particular bugbear of mine is the misinformation surrounding multi-threading. A good multi-threaded implementation can significantly improve user experience. Unfortunately a lot of misinformed approaches to multi-threading will significantly reduce performance, increase erratic bugs, increase dead-lock risks, complicate debugging etc. So remember: just because an "expert" said it, doesn't make it true.
  • Take ownership. (No seriously, I'm not playing boardroom bingo.) Negotiate with managers, product owners, sales people for performance features taking precedence over some checklist items. Demand better specifications. Not childishly, but by asking questions that get people thinking about performance.
  • Be a discerning consumer. Pick the phone that has less features but is faster. (Not faster CPU, faster UI.) Then brag about it! The more consumers start demanding performance, the more bean counters will start budgeting for it.
","22017","","-1","","2020-06-16 10:01:49","2013-09-22 19:50:39","","","","1","","","","CC BY-SA 3.0" "108816","2","","108812","2011-09-16 15:37:53","","14","","

In short, and from the viewpoint of a user of these systems:

A user of a CMS manages the content and structure of a website. A user of a CRM manages a company's contacts. A user of an ERP application manages invoices, product prices and inventory.

A CMS is something completely different from a CRM/ERP application.

There is often overlap between a CRM and an ERP, but a CRM is more focused towards sales people and an ERP system is more often used by administration.

I think Wikipedia will tell you all you want to know.

","2820","","","","","2011-09-16 15:37:53","","","","0","","","","CC BY-SA 3.0" "6842","2","","6827","2010-09-23 21:25:33","","4","","

Rewriting a Telco grade voice mail system.

The previous system ran on Unix, and around the late 90s Microsoft's COM technology came along. Many developers were working on the new NT-based system. After a lot of effort its performance was still nowhere near that of the Unix system, and a big customer who had bought the new system was pissed. The company had to be sold and some people had to leave.

It was ugly. All this happened about two years before Joel wrote his article: Things You Should Never Do, Part I

","42","","","","","2010-09-23 21:25:33","","","","0","","","","CC BY-SA 2.5" "109383","2","","109357","2011-09-19 21:04:33","","2","","

In my opinion, you should NOT do release planning as a team of 10 people. Most likely you will end up with a giant meeting where in any given discussion 6-8 people will feel completely disconnected and bored. Add to that the exhaustion of 3-4 hours being locked in a room together. And consider that if 10 people talk, you have way too much conversation. If they don't talk, you may not get valuable input.

We did something very similar to Joseph's company. For the previous release we had 8 engineers, and release planning took 2 solid weeks. It was absolutely brutal. A few hours into each day, I think all of us started trying to speak as little as possible so that the meeting would be over sooner.

This release our team size more than doubled. So we broke up into smaller teams that would take permanent ownership of an area of a product. Each of the smaller teams had a lead. Then we did high-level release planning with just the leads, which went by way faster and more efficiently because now we only had 4 developers in a room. During this time, we identified which team would do what stories and how the product will be divided. Also this gave leads the larger picture of the entire product.

Then each lead went back to his own team and went over the portion of the release that only that team was responsible for. During this time, we filled in some details and assigned story point values.

Lastly, everything was put together and we did one final walkthrough (more of a presentation than a discussion) so that everyone knew what was going on across the entire team.

Although we haven't yet had a full, successful release with this method, I do think that release planning overall went much more smoothly than before and we got much more out of it. The key was that we never had more than 3-4 developers in any given meeting, and everyone's voice was still heard.

If possible I'd recommend you split your 10 developers into 3 groups. If you can't divide your overall release into 3 mostly-non-overlapping areas, then even 2 groups would be better than one large team.

","20673","","","","","2011-09-19 21:04:33","","","","0","","","","CC BY-SA 3.0" "109465","2","","109442","2011-09-20 08:59:13","","27","","

Socket programming (at least as the term is normally used) is programming to one specific network API. Sockets support IP-based protocols (primarily TCP and UDP)1.

Network programming can be done using various other APIs. Windows has a number of protocol-independent APIs such as the WNet* and Net* functions. Older versions of Windows also used NetBIOS/NetBEUI (NetBIOS Extended User Interface), and most supported (and probably still do) IPX/SPX (an old NetWare protocol).

Most current network programming, however, is done either using sockets directly, or using various other layers on top of sockets (e.g., quite a lot is done over HTTP, which is normally implemented with TCP over sockets). TCP/IP and UDP/IP (as well as a number of other IP-based protocols) are done primarily via the sockets interface. In theory, other programming interfaces could be used, but in practice sockets seem to be sufficient, so there's not a lot of interest in replacing it. I should, however, mention that Windows sockets (WinSock) have quite a few extensions that are more or less unique to Windows. I suppose it's open to some argument whether code that uses these extensions really qualifies as ""sockets"" code or not -- they are extensions based on the same concepts, but code that uses them isn't normally portable to other systems. I guess whether it qualifies as ""sockets"" or not depends primarily on whether you think of sockets more as a concept, or as a very specific set of functions, parameters, etc.

Edit (in reply to comment):

It's a bit hard to say whether ""knowing sockets"" implies knowing ""everything"" about TCP and UDP. Let's consider just one small piece of things: one typical demo program for sockets is creating a client/server chat program. The client connects to the server, and when the user on one client types something, it gets forwarded to the other clients that are connected to the same server. Each client displays what comes in from the server, and lets the user type in messages to be sent to the other clients.

At the same time, consider what a ""real"" chat program like AIM, Windows Messenger, iChat, etc. involves. To handle not only text, but voice, video, file transfers, groups, lists, etc., a typical program probably involves a dozen different standards, including such things as SIP, STUN, TURN, RTCP, RTP, XMPP, mDNS, etc.

IMO, somebody who ""knows sockets"" should be able to code up the first (demo-level, text-only) chat program in a few hours without spending much time in help files (and such) doing research. Unless they claimed at least some prior experience working on a ""real"" chat program, I wouldn't expect them to even know which RFCs/standards applied to such things though.
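
To give a flavour of what that demo-level exercise amounts to, here is a minimal sketch of a broadcast chat server (my own illustration, not from any standard) using Python's socket and select modules; a real product would obviously need far more:

    # Demo-level chat server: whatever one client sends is relayed to the others.
    import select
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('0.0.0.0', 5000))
    server.listen()

    clients = []
    while True:
        readable, _, _ = select.select([server] + clients, [], [])
        for sock in readable:
            if sock is server:
                conn, _addr = server.accept()   # a new client joins
                clients.append(conn)
            else:
                data = sock.recv(4096)
                if not data:                    # client disconnected
                    clients.remove(sock)
                    sock.close()
                else:
                    for other in clients:
                        if other is not sock:   # relay to everyone else
                            other.sendall(data)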

The same applies in general: given the number of RFCs (and various other standards) that get applied to all the different things people do over networks, it's unreasonable to expect anybody to have memorized all of them. Nonetheless, if you have a set of requirements for something that you'd expect people to be able to handle in a ""local"" program easily, just adding ""over the network"" as a requirement shouldn't normally add a tremendous amount of difficulty (though dealing with issues like network latency might).


1 Sockets on Unix also support Unix-family sockets, but these are (at least normally) used for intra-machine IPC, not networking. There are also literally dozens of other protocols for such things as router management that sockets don't really support (beyond raw sockets allowing you to build and send arbitrary packets).

","902","","902","","2014-03-04 14:22:21","2014-03-04 14:22:21","","","","3","","","","CC BY-SA 3.0" "7482","1","7487","","2010-09-27 00:34:14","","29","7949","

As per this question: I decided to implement the BitTorrent spec to make my own client/tracker.

Now, as I was going through the spec, I was about 70% done implementing the BEncoding when I found a link to an implementation of BEncoding in C# written by someone else.

Normally, if I were working on production code, I'd use it as a reference to check my own work against, and a baseline to write some tests to run my code against, but I found myself thinking ""I'm making this, it's a for-fun project with no deadlines; I should really implement it myself - I could learn a lot"" while some voice in my head was saying ""Why bother re-inventing the wheel? Take the code, work it so that it's in your style/naming convention and you're done.""

So I'm a bit conflicted. I ended up doing the latter, and some parts of it I found better than what I had written, but I almost feel like I 'cheated'.

What's your take? Is it cheating myself? Perfectly normal? A missed opportunity to learn on my own? A good opportunity to have learned from someone else's example?

","1554","","-1","","2017-04-12 07:31:33","2010-12-11 03:23:41","What's your view on using other people's code?","","9","3","4","2013-09-10 08:21:38","","CC BY-SA 2.5" "109551","2","","109523","2011-09-20 16:28:57","","22","","
  • Set a good example. Make your own commit messages a shining example of usefulness. Include references to whatever other systems your team uses for managing stories and defects. Put a brief statement summarizing the change, and a good explanation of why the change is necessary and not something else in every submission.
  • Whenever the lack of a decent commit message causes you extra work, throw a question to the submitter. Be persistent with this (but not a jerk).
  • If it's not overstepping your role, write a script that sends a daily changelog using the commit messages (see the sketch after this list). This will lend credibility to your argument that useful messages have a benefit beyond browsing through revisions. This might also help get management on your side, since they'll see day-by-day what's happening.
  • Identify your allies. Hopefully there's at least one other individual who agrees with you (perhaps by silently not disagreeing). Find that person or those people and convince them further so that you aren't standing alone.
  • When the opportunity to mention how decent commit messages have saved you time (or poor messages have cost you time) presents itself, seize it.
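
For the daily changelog script mentioned above, something this small (a hypothetical sketch; adjust the format and how it gets delivered to taste) is usually enough to start with:

    # Tiny daily-changelog sketch: collect the last day's commit messages so they
    # can be mailed or posted somewhere visible.
    import subprocess

    log = subprocess.run(
        ['git', 'log', '--since=1 day ago', '--pretty=format:%h %an: %s'],
        capture_output=True, text=True, check=True,
    ).stdout

    if log:
        print('Changes in the last 24 hours:')
        print(log)
    else:
        print('No commits in the last 24 hours.')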

Don't be afraid to be the squeaky wheel. Fighting other peoples' bad habits is often a war of attrition.

","839","","","","","2011-09-20 16:28:57","","","","0","","","2011-09-20 19:06:15","CC BY-SA 3.0" "213395","1","213402","","2013-10-04 15:25:20","","27","8015","

It is commonly agreed that team managers should not be scrum masters, but I am struggling to see why. For context, I am an Application Development Manager with 4 devs in a Scrum Team. I come from a Scrum Master background, and have introduced scrum to the organisation. I have built the team from scratch and made it clear that everything I do is to facilitate the team, and that they make the decisions. As a team we are very open - they even silenced me at stand-ups for a while to eliminate the 'reporting' feel we were starting to get. A lack of openness is generally the biggest argument against the manager as scrum master, but handled well, is easily overcome with the right culture.

I've been warned by experienced scrum coaches that this is a dangerous situation, and that there are risks 'if things go badly'. The way I see it, the 2 positions do not conflict; in both roles I have the same aim for the team and individuals. Scrum resolves conflicts within the team, which could traditionally be a manager's role. The self-managing nature of sprints takes away the allocation of work a manager would traditionally do.

All I really see left to pick up as a dev manager is making sure individuals' needs are met - career objectives, workplace, etc. I have a weekly catch-up with each team member to raise any issues, and handle any admin tasks. A lot of this relates directly to the team, or to my role as scrum master anyway.

I understand how, in large organisations, this could be unmanageable and warrant a separate role, but in a small organisation we certainly could not justify another Scrum Master or Development Manager.

Please enlighten me as to the pitfalls of Development Managers as Scrum Masters, excluding the points I raised above and have already overcome.

","98749","","98749","","2013-10-04 15:33:31","2015-06-19 12:18:57","What are the negatives of Development Managers as Scrum Masters?","","8","7","11","","","CC BY-SA 3.0" "213410","2","","213395","2013-10-04 18:37:44","","3","","

The biggest problem I see is the situation where the team has an issue related to you, the manager. If they are junior, or lack confidence, they may be afraid to speak up during retrospectives. This could limit the effectiveness of the retrospectives. Many people are afraid to say ""I feel the manager is being unrealistic"" when the manager is present.

So, maybe you excuse yourself from retrospectives to solve this problem. Now your team is doing retrospectives without a scrum master, also potentially limiting the effectiveness of the retrospective.

In either case, you're having a negative impact on the team.

","6586","","","","","2013-10-04 18:37:44","","","","0","","","","CC BY-SA 3.0" "213692","2","","213681","2013-10-07 22:57:53","","8","","

If you are going to have only a REST API (and no actual server-side ""pages""), the biggest challenges that come to mind for me are:

  1. Accessibility

    It's sad, but most screen readers are way behind the times and still can't cope with dynamic content. The modern solution is WAI-ARIA, and it's a steep learning curve.

  2. Indexing (SEO)

    To Googlebot and other spiders, your site is essentially a blank page. If you need content to be indexed, you'll have to do more than that. For truly ""bare"" APIs that don't serve HTML at all, you'll probably want to follow Google's recommendation of using your own JavaScript-capable spider to take snapshots of pages and serve up hashbang links.

  3. Cross-resource transactions

    Not every workflow is going to be cleanly encapsulated by a single resource. You might need to update an /invoice and /account at the same time. There still aren't any real standards for this; the typical answer seems to be to encapsulate the transaction itself as a resource, and handle atomicity in the REST API for the transaction resource (see the sketch after this list). A shopping cart is a good example of a transaction; once you checkout, the entire basket is ""committed"" at once.

  4. Web Optimization

    You're taking on more than 10x the amount of JavaScript that is found in more conventional web apps. All of this has to be served to the client and run at some point, so minification and performance optimization are not optional, and it can be harder to optimize the client than the server when you consider that some of your users will still be running IE8 or IE7 on Windows XP on an original AMD Athlon machine that their nephew built for them 7 years ago. Also, since you'll be frequently changing your scripts, you can't just minify, you also have to version, otherwise your app will mysteriously break due to caching and such.

  5. Error Handling

    Most back-end web frameworks have this built-in. You just hook up a global exception handler, some logging, custom error pages, and you're done. It can be rather difficult to do this in JavaScript. You can hook up a global error handler to $.ajax or whatever, but even that is fraught with problems, since some error codes have semantic meaning (404, 409, etc.) Assuming you're even willing to go to the effort of setting up a logging API to collect errors, you'll have to make a trade-off between verbosity and bandwidth, and also put some kind of security or rate limiting on it so that people can't just spam your logging API to DoS your site.

  6. Testing

    At least in my experience, testing becomes more difficult because you need to maintain parallel suites of unit tests for client and server. If you do browser-based integration testing, it will be harder to tell if the failures are coming from the client or server side, unless you also maintain another suite of API-only integration tests. Basically, it's like adding another ""tier"".

  7. Authentication/Authorization

    There are standards for this (e.g. OAuth), but they're a pain to implement compared to more traditional models. This is especially true if you're running a mixed HTTP/HTTPS site and have to deal with crap like CORS. It's very much a solvable problem and lowest on my list for a reason; it'll just be extra work and bite you at odd times, because many of the back-end frameworks can be a bit... imperious about how authentication is supposed to work, and over-utilize things like session state.

    Also, your app is basically spilling its guts to everyone who uses it, making it trivially easy to reverse-engineer the API, so you'd better make sure you have lots of validation in place, otherwise any random script kiddie can 0wn you. Not that this is a new problem, you're just making it a little easier for them by providing a direct line.
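
Returning to point 3 above, a minimal sketch of the transaction-as-a-resource idea: the checkout itself is a resource, and POSTing it commits the related changes as one unit. Flask, the in-memory stores and the lock are my own illustrative assumptions, not a recommendation of specific tools.

    # Sketch: the checkout (transaction) is itself a resource; POSTing it
    # updates the account and creates the invoice atomically.
    import threading
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    lock = threading.Lock()

    accounts = {'alice': {'balance': 100.0}}    # hypothetical stores
    invoices = {}
    checkouts = {}
    next_id = 1

    @app.route('/checkouts', methods=['POST'])
    def create_checkout():
        global next_id
        basket = request.get_json(force=True)   # e.g. {'account': 'alice', 'amount': 25.0}
        with lock:                              # commit the whole basket or nothing
            account = accounts[basket['account']]
            if account['balance'] < basket['amount']:
                return jsonify({'error': 'insufficient funds'}), 409
            account['balance'] -= basket['amount']
            invoices[next_id] = {'account': basket['account'], 'amount': basket['amount']}
            checkouts[next_id] = {'id': next_id, 'invoice': next_id, 'status': 'committed'}
            result = checkouts[next_id]
            next_id += 1
        return jsonify(result), 201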

All that being said, these are not even close to being showstoppers. Aside from the vastly improved user experience, you gain huge benefits from this model through reusability and separation of concerns. It becomes much harder (and much less necessary) for developers to abuse session state or other temporary data. And you can truly have front-end and back-end specialists, rather than having to expose your back-end guys to markup or your front-end guys to controllers.

I very much prefer this architecture but do be aware of the trade-offs you're making. It's still kind of a new/immature field - and people are still making a lot of mistakes. I wouldn't trust any ""best practices"" guide this early in the game.

","3249","","","","","2013-10-07 22:57:53","","","","1","","","","CC BY-SA 3.0" "110451","2","","110437","2011-09-24 20:51:59","","4","","

Thomas Owens' comment is pretty much to the point. Having been a freelancer doesn't say anything about someone as a person or as a developer.

Personally, I have been working in software development in a couple of distinct capacities:

  • As an employee of a (large) consultancy organisation. I would work on their clients' projects at their clients' place of business. An hour-invoice type of deal; this was at a time when the concept of a fixed-price project was still in its infancy.
  • As a freelancer / self-employed contractor. Essentially this was the same as being employed through a consultancy organisation but I had the power to say ""no"" and could go after projects I liked.
  • As an employee of an in-house development shop.
  • As an employee of an independent software vendor.
  • As a business owner, developing smaller software applications for clients.

What does that say about me as a developer? Nothing.

There are many prejudices about freelancers, for example that they don't have any staying power, are easily bored, can only be trusted with the simplest of assignments. They probably are true for some, because if you do get that itch or your quality isn't up to scratch it may be easier to jump from project to project than to stay with one organisation for a longer period of time.

Freelancers are willing to take their skills and put them on the line. They get the boot quicker than any other employee, often simply because the money ran out or company politics killed the project they were hired to do. If someone has 10+ years experience as a freelancer, he or she has been able to pay the bills for all that time without the comfort of job-security. To me that is a positive. Even spells of many short contracts (3 months) are not necessarily a warning sign, while that would most certainly set alarm bells ringing for someone who was an employee all that time.

And what about someone who was employed all that time by a couple of consultancy organisations? They could easily hide the fact that they were booted from every project they worked on, simply by not mentioning specific/any projects or being vague about their duration.

Or someone who was employed by let's say three big organisations with large in-house development shops? Are they any better? More stable? Again it would be easy for someone with an employment history like that to hide less attractive facts in a general description of their employment. Never mind they were spat out by every team they were assigned to.

TLDR

The type of contract someone had when working on a project means diddly-squat. Having been a freelancer your entire career doesn't say anything about your worth as a developer. Nor does having been an employee all your career say anything about your worth as a developer.

Your projects do. Your skills do. Your colleagues do. Your references do.

","1324","","","","","2011-09-24 20:51:59","","","","0","","","","CC BY-SA 3.0" "110550","2","","110510","2011-09-25 13:24:33","","6","","

I don't like the idea of a stale ""Written by"" header.

Correct. It's useless.

And a ""Written by"" header that just includes a long list of people with no context doesn't seem useful.

Correct. They're all dead, BTW. Or won the lottery. They're at sea and cannot be contacted. Why list them?

My favorite is using initials. Added 12/7/91 SRP. Who's that? And how will you find out? If they were a contractor, you'd have to pull all the invoices from around that date and call all the contracting firms that are still in business so they can pull all their personnel records.

what do you do about ""Written by"" headers from code that's given to you (from consultants, old code bases, downloaded from the net, etc)?

Ignore it.

","5834","","","","","2011-09-25 13:24:33","","","","0","","","","CC BY-SA 3.0" "110774","2","","110764","2011-09-26 16:29:35","","20","","

At a simple level, yes. Simply performing a Waterfall every two weeks does not make you agile, but it is iterative (which is half of agile).

The waterfall model defines phases - requirements, architecture, design, implementation, verification (testing), validation (acceptance testing), and release. In any iterative methodology, you go through each of these phases within every iteration. There might be overlap between them, but you elicit and capture requirements, adapt the architecture and design of the system to allow for implementation, develop the new features or fix the defects, test the new modules, and then present it to the customer for acceptance testing and deployment.

However, there's a lot more to agile than just being iterative and incremental. The tenets of agile are captured in the Manifesto for Agile Software Development. There are four key points made in the Manifesto:

Individuals and interactions over processes and tools

You involve individual people frequently. Many implementations are centered around self-organizing and self-directing teams. Nearly all have frequent interactions with the customer or someone who has the voice of the customer. Rather than having a formal set of procedures to follow and tools to use, you let the people working on the project drive how it gets done, so that it gets done in the best possible manner.

Working software over comprehensive documentation

In a software project, the primary goal is the delivery of software. However, in some projects, there is wasteful production of documents that add no value. Scott Ambler wrote a good article on Agile/Lean Documentation. It's not about not producing documentation, but about choosing documentation that adds value to your team, future developers, the customer, or the user. Rather than producing documentation that doesn't add value, your software engineers are instead producing software and associated tests.

Customer collaboration over contract negotiation

Rather than defining the terms and timetables and costs up front, it becomes a continuous effort with the customer. For example, you might capture your requirements in the form of user stories and assign them points. After a few iterations, you settle on a velocity (points/iteration) and can determine how many features your team can implement in an iteration. As your customer provides feedback on which features add the most value, they can decide when the project is done at any point. Any number of things can happen with frequent delivery and customer interaction - the requirements have been satisfied and the project concludes into maintenance and eventually end-of-life, the customer finds out that they don't need everything they thought so decides to end the project, the project is failing and the customer sees this early and can cancel it...the list goes on.
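
To make the velocity arithmetic concrete, here is a minimal sketch in C++; the numbers are hypothetical and only illustrate the kind of forecast described above, not a prescribed calculation:

#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    // Story points completed in the last few iterations (hypothetical values).
    std::vector<int> completedPoints = {24, 30, 27};

    // Velocity is the average number of points delivered per iteration.
    int velocity = std::accumulate(completedPoints.begin(), completedPoints.end(), 0)
                   / static_cast<int>(completedPoints.size());

    // Remaining backlog, estimated in points (hypothetical value).
    int remainingBacklog = 270;

    // Rough forecast: how many more iterations the remaining work needs (rounded up).
    int iterationsLeft = (remainingBacklog + velocity - 1) / velocity;

    std::cout << ""Velocity: "" << velocity << "" points/iteration\n"";
    std::cout << ""Forecast: about "" << iterationsLeft << "" iterations left\n"";
}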

Responding to change over following a plan

You don't have a big design or ultimate plan up front and have to perform rework whenever that design or plan has to change. You continually estimate and revise estimates based on the information that you have. You choose your metrics carefully to provide insight into the health of the project and when to make internal changes. You frequently add, remove, and reprioritize requirements with the customer. Ultimately, you understand that change is the only constant.

Being agile means focusing on people and meeting their needs by delivering high-quality, value-adding software quickly. As the needs of the customer change, you adapt to those needs to focus on adding value. There are specific implementations of agile methodologies, but they are all centered on people, timely delivery of working software, and adapting to a rapidly changing environment.

","4","","-1","","2017-04-12 07:31:30","2011-09-26 16:44:31","","","","1","","","","CC BY-SA 3.0" "335167","2","","335166","2016-11-02 11:37:52","","9","","

Even if you are working continuously on a project, unless the project is tiny, you'll have to switch between features, some of which you haven't modified for months or years.

And if you work in a team, chances are you will constantly discover parts of the code base you haven't written in the first place.

So what makes it possible to reduce the time wasted rediscovering the project?

  • Refactoring. This is the most important technique which will allow you to spend less time asking yourself what you were thinking about when you wrote a piece of code two years ago.

    It is not unusual, when developing a new feature, to try ideas, and to care less about architecture and design, simply because requirements may be unstable, and you may not be sure how the requirements should be implemented. However, once the feature is implemented, the work is not done. With the help of regression tests, come back and refactor the code. Regularly.

    As explained by DocBrown, by being refactored regularly, your code will become self-explanatory, which has a huge benefit when coming back to the project in a few years: instead of constantly switching between code and documentation, you'll be able—ideally—to simply follow the code. If you also follow the five principles of SOLID, it becomes much easier to make changes later: sometimes, you'll simply be able to create a new class for a new feature, without even touching the existing code (see the sketch after this list). In other cases, you'll need to modify the existing code, but you'll know that the changes will be limited to one or a few classes, with little risk of creating regressions somewhere else.

  • Architecture and design. Do you have one? (It seems from your question that you do.) If not, it will be difficult to get the larger picture from the code alone; thus, understanding will suffer.

  • Style. Coding standards matter because they make it easier to read the code. As explained by DocBrown in his excellent comment:

    A class naming standard can help to find the correct class to change more easily. A naming convention for making a distinction between local and non-local variables makes it easier to understand the impact of a planned change. Moreover, exploring an existing project involves lots of code reading, so everything which makes the code easier to read will speed up the process.

    Given that it is usually very easy to enforce a common coding standard by adopting an existing standard and enforcing it automatically in the pre-commit hook, there are no excuses not to use one.

  • Documentation. You don't need to write a one-hundred-page document explaining everything in detail: it will be boring, and you won't enjoy reading it. However, a few diagrams will provide tremendous help later. Documenting design choices is a good idea too: for instance, if you decided to write your own thing instead of using something which exists already, explain why you did so: maybe third-party solutions were incompatible with the system, or too slow, or had some severe limitations.

  • The list of terms. Quoting from your question:

    As there is some jargon used, I need to read through the docs just to recall the terms being used.

    In some projects with a lot of technical (or domain) terms, I was actually creating a list of terms. The goal is then to ensure that you update this list regularly, and that you use those terms consistently.

    The list should be as accessible as possible. If it's not easy to read, you (and your peers) won't use it. If it's difficult to modify, it will quickly become outdated. Note that it takes time and practice to get a format which will suit you.

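As a minimal sketch of the ""new class for a new feature"" point in the refactoring item above: the names (ReportExporter, CsvExporter, PdfExporter) are invented for the illustration, but the shape is what SOLID-friendly code tends to look like. The new feature is one new class, and the existing consumer code is untouched.

#include <string>

// Existing abstraction: the rest of the code base only depends on this interface.
class ReportExporter
{
public:
    virtual ~ReportExporter() = default;
    virtual std::string fileExtension() const = 0;
};

// Existing feature, already covered by regression tests.
class CsvExporter : public ReportExporter
{
public:
    std::string fileExtension() const override { return "".csv""; }
};

// New feature: adding PDF export means adding one class,
// without modifying CsvExporter or any code that uses ReportExporter.
class PdfExporter : public ReportExporter
{
public:
    std::string fileExtension() const override { return "".pdf""; }
};

// Existing consumer code: works with any exporter and does not change
// when a new exporter class is added.
std::string exportFileName(const ReportExporter& exporter)
{
    return ""report"" + exporter.fileExtension();
}
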
Note that whatever you do, you will spend some time understanding the code two years later, even if it is code you've written yourself. Look at open source projects: some are written by talented developers who do a great job, but even then you'll spend a few hours to a few days exploring the project before you can contribute to it. Your code is no different: you can't remember every detail for years, which means that when coming back to your project a few years later, you find yourself in the same situation as if the project had been done by someone else. Moreover, your style may have changed; you've learned new tools, language features and APIs, which only widens the gap between you today and you two years ago.

it took me a couple of weeks to get into the details of implementation

Depending on the project size and what you mean by details of implementation, spending a couple of weeks may not be exceptional; but if it takes four hours to develop a feature for your customer, and you ask them to pay for a couple of weeks plus four hours, things may go wrong. Make sure you learn about your project only the things you need to know in order to implement the feature.

For instance, if the product is an e-commerce website, and the feature is a change in the way PDF invoices are generated once a user buys something, there is only a tiny part of the project you need to understand. You don't have to know, for instance, how products are displayed on the website, or how the cart works. Unless it's a legacy spaghetti codebase, your changes will only affect the code which deals with PDF generation of invoices, and won't propagate to other parts of the product.

","6605","","-1","","2017-04-13 12:45:55","2016-11-03 11:09:31","","","","5","","","","CC BY-SA 3.0" "111290","2","","99389","2011-09-28 11:12:20","","14","","

SOAP or REST? Other answers do a good job of helping you argue the point from a technical perspective. However, I predict a KO is unlikely, simply because technically you could do most things with either approach. There are a few exceptions that may lead to a technical knockout, e.g.:

  • if API requests should be routable through external messaging middleware while still allowing the recipient to authenticate and verify the original sender, then SOAP wins
  • if you want to use your existing application's authentication and access control logic, then REST can be virtually a NOP, while for SOAP that can be a mini-project in its own right.

If you don't have one of these exceptional requirements as a must-have, and since you are not the boss(!), I think the best you can hope for is a draw because although the decision appears ""technical"" it will remain subjective.

But .. if you want to make the best decision (and not just win an argument), maybe you can push to look at this, with your boss, in a different way:

Since you ""need to create an API to our system"", I'm inferring this is not just an internal technical detail of you system and ""Technical arguments are for technical people -- aka, people who will be doing the work"" doesn't apply. There's a group of people out there somewhere that will have to deal with whatever you deliver, and I think you would probably like them to use it and love it? If so:

What they need for the API probably trumps any arguments you or your boss can come up with (at least that's what they would think)

e.g. if they will want to integrate with your API via BizTalk or somesuch, then maybe SOAP it is (document literal and all). But if they are coders that will be writing to your API, SOAP may be the death knell for adoption, while REST will make you heroes.

If you already know who these people are, I reckon you should ask them what they need from an API. If it's a ""new market"", then maybe try to rope in the best representatives you can find, or at least try to describe and understand what the ""representative customer environment"" is going to be to help inform the decision.

In other words, I'd recommend you see if you can find the customer voice either from real external customers, or others in the organisation or partners who can do a good interim job pre-launch.

(then when they tell you ""REST! Don't you dare give us some Rube Goldberg SOAP monstrosity"", you can smile knowingly at your boss)

","37621","","","","","2011-09-28 11:12:20","","","","2","","","","CC BY-SA 3.0" "111402","2","","111301","2011-09-28 23:17:08","","9","","

I think the first thing to realize is that there is a difference between being Agile and being agile. Slowly rolling out agile techniques and characteristics - cross-functional teams, adaptive planning, evolutionary/incremental delivery, time-boxed iterations, and even introducing concepts from Lean - is very different from introducing Extreme Programming, Scrum, or Crystal.

You explicitly mention customer involvement. Yes, many of the Agile methodologies call for customer involvement, but that's not required to be agile. In every government/defense-related program, I've always had a program or project manager who was the point of contact with the customer. This person becomes the ""voice of the customer"". It might be slowed down as they teleconference or email or call and clarify, but you can have a single person (or a group, if you have deputy PMs as well) who is the customer representative for your team. Admittedly, it's not quite the same. But isn't being agile about being flexible and responding to change?

You also mention a few key concepts: predefined requirements, having feature requests ""thrown over the wall"", a lack of prioritization because ""they are all important"", and fixed-cost and/or fixed-schedule projects. Each of these can be addressed in different ways.

If you think you have all of your requirements up front, chances are you don't. Requirements do change. Just because you have a ""finished and signed off"" specification doesn't mean it is set in stone. Given whatever requirements document you have, capture the requirements how you feel comfortable and/or in the manner specified by the contract and deliver the requirements, the design, and the architecture. In addition, see if you can treat these as living documents (a design document I saw today at work is labeled as Revision G, which means it's on its 8th update). Ask about how much you can leave as TBD in any given iteration and how much needs to be firmed up now - there might be some give and take.

Be agile with your documentation. Don't duplicate efforts between ""what your team wants"" and ""what the customer wants"". For example, if your customer wants a traditional software requirements specification and your team wants to use user stories, try to adapt to a traditional SRS and use action items and a rolling action item list instead of user stories so that you don't spend time formulating both ""the system shall..."" and ""<the user> must be able to <do something> because <reason>"". This does take discipline on the part of the team, though, to adapt to differences between projects. Capture problems in reflections.

Once you get to development, you might run 5 or 6 iterations, and then invite your customer to your facility to see a subset of your implementation. Rinse and repeat this process. It's not the constant involvement demanded by some methodologies, but you do have the advantage of high visibility. If your customer says no, at least you tried. If they say yes, you can enlighten them on being agile. On one project I was on, the customer visited the site every few months (3-5 months, usually). They would watch us go through QA testing, they would discuss concerns with engineers, and business with the program/project office. It was an opportunity for everyone to get on the same page.

Testing and maintenance happen the same as on any other agile project. Create your test procedures and document defects in the appropriate way, track metrics per contractual obligations, and document test results. If you want to follow TDD, go for it. Continuous integration is another good idea. During project status meetings, your project manager can use this information to say ""we implemented N requirements, have M tests, X tests pass"" and report on project health and status to the people with the money.

Speaking of money, we have the fixed-cost and/or fixed-schedule problem.

Dealing with a fixed schedule is fairly straightforward. Given your requirements, you know how many iterations that you can complete. Your workload for each iteration is pretty much set in stone in terms of features to implement, test, and integrate. It might be difficult, but it's not impossible to break up features and assign them to iterations in advance. This goes back to my point about inviting the customer - if you have one year and are using 2 week iterations, perhaps invite the customer quarterly (and invite them every quarter) and show them the results of the previous work. Let them see your prioritization of requirements, your future plans, and how you are going about scheduling.

Dealing with a fixed budget is similar. You know how much time you have, how many resources you have for the project, how much they cost, and therefore how many hours everyone can work per iteration. It's just a matter of ensuring that everyone keeps track of this carefully. If your company can eat the cost of overtime, go for it. Otherwise, make sure everyone works the appropriate length of time and use good time management skills and time-boxing to keep everyone productive. More productive hours is what you need to keep costs down - deliver more value-adding documents and software without the cost of meetings and overhead.

Ultimately, it's not about necessarily being Agile, but applying the things that make Agile good and being agile. Be able to respond to changes in requirements, be able to deliver frequent software even if the customer doesn't want it, only produce value-adding documentation (along with whatever you are contractually obligated to produce), and so on.

","4","","","","","2011-09-28 23:17:08","","","","4","","","","CC BY-SA 3.0" "112083","2","","112057","2011-10-03 12:21:12","","2","","

Establishing in advance that you will be paying contributors, and making it public, is likely to attract not-so-passionate-about-the-product fellows. I wouldn't.

Depending on your jurisdiction, it could also require some authorizations.
(I can easily imagine some thick bloke interpreting that as a contest of sorts.)

Maybe writing a check, here and there, to a couple of professional programmers who are very involved in the process - people you end up considering precious to the project, let's say ""premium contributors"" - looks safer.

Get invoices for that (you should be able to deduct those from your earnings), and make sure they are business entities (or that, anyway, according to your local laws, you're not accidentally becoming their employer and having to pay their social security).

","30396","","","","","2011-10-03 12:21:12","","","","0","","","","CC BY-SA 3.0" "336067","2","","334305","2016-11-15 00:49:15","","1","","

Some smart people voiced their opinions on this already, but I just think that it isn't the room's responsibility to know what its neighbours are.

I think it's the building's responsibility to know where rooms are. If the room really needs to know its neighbours, pass an INeighbourFinder to it.
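
A minimal sketch of what that injection might look like in C++; the interface shape and names are hypothetical, only following the INeighbourFinder idea above:

#include <vector>

class Room;

// The building (or whoever owns the floor plan) implements this;
// the room only depends on the abstraction.
class INeighbourFinder
{
public:
    virtual ~INeighbourFinder() = default;
    virtual std::vector<Room*> neighboursOf(const Room& room) const = 0;
};

class Room
{
public:
    explicit Room(const INeighbourFinder& finder) : finder_(finder) {}

    // The room asks the injected finder instead of tracking its own neighbours.
    std::vector<Room*> neighbours() const { return finder_.neighboursOf(*this); }

private:
    const INeighbourFinder& finder_;
};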

","81981","","","","","2016-11-15 00:49:15","","","","0","","","","CC BY-SA 3.0" "336108","2","","336104","2016-11-15 15:59:29","","5","","

How to reorganize

If you do the reorganization wrong, you'll eventually have to deal with the angry team. In the best case, they will just revert your changes; in the worst case, they will have to deal with the new organization, even if they find it highly irrational. You can easily avoid this.

creating unnecessary churn, and insulting my co-workers

Don't do it alone. Talk with your coworkers. Prepare a reorganization plan together.

Discuss the branching strategy with your coworkers as well. You're probably not in a position to decide which one would be better for everyone, so you have to consider what your colleagues are thinking of different strategies (and their opinion is way more important than an opinion of some guy from SoftwareEngineering.SE, especially since we don't know anything about the project topology and organization, the team and their past experience and habits).

Note that your team may also decide that there is no need for reorganization, for three possible reasons:

  • The codebase is good enough. Yes, it may not be the best codebase out there, but if the team knows it well, it may be perfectly OK for now.

  • This is not a good moment for it. If you worked on a non-Agile project for the last eight months and you are about to ship the first release in two weeks, you may postpone touching the repository for now.

  • The reorganization can be made through small changes over time. In other words, there is no need for one big reorganization. Just like in a codebase, you may have major changes which would affect hundreds of classes, but you'd better make small, localized refactorings, each one moving you towards the target while ensuring everything works well. Reorganization of the repository is very similar: what if you spend four hours reorganizing all the stuff around, and when you finish, you find that the build is broken, and that while you can build the project on your machine, 80% of tests now fail?

When to reorganize

Once you've decided that the repository needs a big reorganization which can't be done through many small steps, the next question to answer is when to do it.

  • Agree on a date when the repository will be reorganized, so that your colleagues can get ready for it by committing all their pending changes before leaving in the evening. The day of the migration, make sure to remind your coworkers to commit their changes at least twice (for instance during the morning standup, and personally to every member of the team when you see that the person is about to leave in the evening).

  • Do the backups.

  • Do the damn backups. Seriously. Things happen, and you don't want to explain to your coworkers that all the work they did for the last week was lost, because automated backups were configured to run once per week.

  • If possible, do the reorganization at night, when nobody is working on the source code. Make sure you have enough time and resources for a case when something goes wrong. What if the build starts to fail once you reorganize the repository? What if some tests start to fail? What if there are warnings reported by the Continuous Delivery?

  • Once the operation is finished, send an e-mail to your team, explaining the changes which were made, the new organization, and the decisions you took during the reorganization.

  • Be with your team the next morning, to assist them if something goes wrong when they update their local source or if they don't understand the new organization.

What to reorganize

As I already explained, it's up to your team to decide exactly what should be the new organization.

Different developers won't even agree on a branching strategy: one developer may prefer one branch per feature; another would suggest a branch per developer. I usually suggest committing directly to trunk, which works for me and for several of my colleagues, but would fail completely for other teams.

Similarly, it's even more difficult to give a magical solution to how files should be organized, without even knowing the current topology of your repository.

However, I can suggest a few things which may apply to most projects and which had positive results in my case in the past. Remember, if your team suggests the opposite, you should absolutely take their opinion over mine:

  • Don't create a new TFS project. I'm not sure if history can be preserved across TFS projects (and whether it's easy to do), but even if it is, I don't see the point: the only case where you would do it is if a separate team/company will continue working on the old project, which is not your case.

  • Don't do old area/new area. You are reorganizing for a good reason: because your team decided that the old organization sucks. There is no need to keep it in any form. Forget about it. Move, don't copy.

  • Remove stuff aggressively. There is a prototype your team wrote a year ago, and the concepts shown by this prototype were implemented in the project? Get rid of the prototype. There is a project you started but abandoned six months ago? Remove it. The benefit of version control is that it keeps everything, forever, so you know that if you need something which was removed, you can still find it. This is exactly the same as cleaning your project by removing code which is not used any longer.

  • Deduplicate. Obviously, if you have duplicate stuff, get rid of the duplication. This also often leads to the following point:

  • Rely on proper dependency management. Since you're talking about TFS, I would suppose that you deal with .NET projects. If you find DLLs stored within the repository, this is a very bad sign: you shouldn't be doing it, but instead using NuGet to handle dependencies for you. For dependencies between the components written inside your company, there is such a thing as a private NuGet server.

  • Clean up the mess. While Visual Studio does a decent job of automatically ignoring the files which should be ignored at version control level, it happens that some files such as user settings or binaries still get in. They have no place in a version control, so get rid of them.

  • Put non-development stuff outside version control. If you work with graphical designers, you may end up with Photoshop documents checked into version control. This is not a good idea, because too many binary documents changed too often could bring your version control to its knees. Non-developers successfully use different revision management strategies than developers; thus, there is no place for their files in TFS.

    Note that if you actually have non-development stuff in the repository, then your reorganization meetings should also include people who work with this stuff. If they actually do benefit from version control, setting a dedicated repository (and probably even a dedicated server) for them would be a good idea in order to keep yours clean from binary content.

  • KISS. I've seen technical leads drawing magnificent diagrams of branches and merges in all directions. Maybe those guys are very smart, but it often appears that they are the only ones who actually understand all this mess, and often even they don't bother using it later. Both project structure and branching should be as simple as possible. For project structure, prefer a flat one over a hierarchy. It's OK if you end up with a list of five hundred directories because you have five hundred projects; much more problematic is to ask yourself six months later whether the project which helps accountants generate PDF invoices from the CRM should go to “Customer relations”, “Accounting”, “Tools” or “Misc”. More about trees on UX.SE.


Note: working in the past as technical lead, I had a few situations where I had to reorganize the source repository or move the source from one system to another (such as SVN to TFS). Every time, the operation went well. Not because I'm good at it, but because we worked as a team deciding together what to do, when, and why. Once everyone was happy with the decisions we took, I just had to stay late in the evening and simply perform the operations by following our plan.

","6605","","-1","","2017-04-13 12:46:08","2016-11-16 01:19:14","","","","2","","","","CC BY-SA 3.0" "112451","2","","112349","2011-10-04 23:27:55","","2","","

Excuse me, but this question sounds like a minor asking for fatherly advice. If this is the case, the good developer will need to embrace these commandments:

  • Remain faithful to yourself. If your gut feels uneasy about a feature, voice your concerns audibly. Chances are good that the team is just awaiting an opening.
  • Do not try to substitute experience with the rules of thumb of the experienced. To you, every situation is different, every feature is new. This is a plus your seniors don't have.
  • Software development isn't an exact science, and it never will be. Therefore, accumulate wisdom, not behaviour.
  • Accept defeat. If the team agrees otherwise, do not repeat your concerns ad nauseam.
  • Think positive. If the idea is really begging for 'shooting it down', try to find and name positive aspects to it before you list its deficiencies.
  • Learn how to interact with people. We developers often place technical knowledge over social competence. The technical abilities peak early in life, but the social competence can keep growing until retirement.
","36729","","","","","2011-10-04 23:27:55","","","","0","","","2011-10-05 11:46:46","CC BY-SA 3.0" "112590","2","","99050","2011-10-05 16:01:19","","4","","

I'd suggest a two-step process. The build creates the installer, which goes out to your server. The client machines are set up to pull the installer on startup/at midnight/in a maintenance window and run it silently.

So you continuously build throughout the day, but the live systems grab the new installer at the appropriate time. It's not so much a matter of having any one thing do all the work of the build/deploy cycle. The point is having all of it automated, which is a different animal.

The classic example is having a robot cook a meal. Most people start designing how the robot will open the door to the fridge, check produce by feel, so on and so forth. But a proper design throws out the concept of a kitchen! A robot for cooking would be the whole room. There would be a part that opens the prebagged produce, already determined correct, and moves it via conveyor belt to the stove. At the stove, multiple arms with built in utensils would begin whipping up the output product. The thing doesn't need hands, it doesn't need legs, it almost doesn't need eyes.

Your build system should not be: ""take what a person would do/currently does and have a program step through it exactly."" Start from what you want to accomplish. When your process does that, the deploy is done, wherever you've defined that to be.

","27114","","","","","2011-10-05 16:01:19","","","","0","","","","CC BY-SA 3.0" "336351","2","","335886","2016-11-18 22:54:28","","3","","

I've written a lot on this subject on SoftwareEngineering.SE in the past, and was in similar situations myself. Therefore, I'll attempt to give a few hints and highlight a few issues I noted when reading your question.

But first, let's talk about an important aspect: your role in the company.

Your role

You may have an explicit mandate from your boss to enhance things, and also a place in the hierarchy where other developers have to listen to your orders. Or you may be among peers, having the same role and the same authority, your option being only... well... an opinion.

In both cases, what matters is less your place in the hierarchy, and more:

  • What other developers think of you. If they treat you as an annoying guy who asks them stupid things, you won't get far. I've seen many cases where technical leaders and project managers had absolutely no influence on the team, because the team knew (or thought) that those “leaders” lacked the technical background required for the decisions they were taking. On the other hand, I've seen several developers who were actually listened to by their peers, because they knew those developers were skillful and experienced.

  • How solid your team is and what motivates them. Imagine a company where every developer is paid per KLOC/month. Would anything you say about style matter to your colleagues? Probably not, because rare are the persons who want to be paid less. In general, if this is not a team but just a group of persons working on the same project, you won't be able to improve anything.

Depending on that, you may decide whether it's worth the effort to make any change. If you have no voice and there is no team cohesion, just go look for another job. If you're known as a talented, respected developer and there is a strong team feeling, you'll be able to improve things relatively easily, even if faced with hostility from your boss or other teams.

In all cases, it is essential not to put pressure on your team. Work with them, not against them. Don't give them orders, but guide them towards the goal.

Now, the hints.

Style

I once asked nicely to follow the coding style and formatting of the majority of existing code (sadly we don't have a formal coding style document). But it didn't work...

Of course it didn't, since this is not the way it should be done.

  • Style is boring.

  • Following style is boring.

  • Writing coding style document is boring (and damn difficult; don't even try doing it unless you have worked with the language for more than ten years).

  • Reading style document is boring.

  • Reviewing code for style mistakes is boring.

  • Trolling that my style is better than yours is exciting, especially when there is absolutely no objective benefit of one style over another. Seriously, every sane person knows that the right way to write if (x) is the way I wrote it, not if(x) or if ( x )!

Therefore:

  • Don't do style reviews. This is the job of style checkers. Those cute applications have a few benefits over your brain: they check the entire project in a matter of milliseconds, not hours or days, and they don't make mistakes and don't miss style errors.

  • Don't write your own style standard. You'll do it wrong anyway, and your coworkers will troll you that you made bad choices.

  • Don't force developers to fix 2 000 style errors.

  • Do enforce style automatically on commit. Code which has style mistakes has no place in version control.

  • Do it from the beginning of the project. Setting up style control in an existing project is difficult to impossible.

For more on that, read the first section of this other answer on SE.SE.

Also:

  • Don't be too strict. For instance, writing jslint-compliant code is quite annoying, so it should be done exclusively when absolutely needed (or if all the members of your team are happy using it). The same goes for static checking tools; for instance, .NET's Code Analysis at maximum level could be very oppressive and depressing, while bringing little benefit; the same tool set at moderate level, on the other hand, proves to be very helpful.

Code reviews

Now that you don't need to bother about style during code reviews, you can focus on more interesting stuff: enhancing (vs. fixing) the source code.

Different persons react differently to code reviews. Some consider it an opportunity. Others hate it. Some listen to everything you tell them, take notes, and don't discuss, even if they could be right. Others try to argue on every point. It's up to you to find a way to deal with every developer according to her personality. It is usually helpful to:

  • Do code reviews in private, especially when the developer is junior and writes a really bad code.

  • Show that there is nothing personal: you are reviewing the code, not the person's skills.

  • Show the actual goal of a code review. The goal is not to show how bad a developer is. The goal is to provide opportunities for improvement.

  • Never argue. You're not here to convince, but to provide your expertise.

  • Never assume the reviewee is the only one who can learn something from a review. You're here to learn too, both by reading the code and by asking explanation about the parts you don't understand.

Once the code review is done, make sure the person actually improves her code. I had a few cases where developers thought that code review ends when the actual meeting ends. They leave and go back to their new features, trying to apply what you shared with them for new code only. Having a decent tracking tool for code review helps.

Note that independently of your particular role in the company and your expertise compared to others, your code should be subject to review as well. You shouldn't be the only one reviewing others' code either.

In a recent project where I worked as a technical leader, I had a hard time explaining to my coworkers that it's their role to do the reviews of each other's code, including mine. The fear of an intern who is about to review the code of his technical leader disappears as soon as he finds the first issues in the code—and who among us writes flawless code?

Training

Code reviews are a great opportunity to teach and learn some of the aspects of programming and software design, but others require training.

If you are able to train your coworkers, do that. If your management is hostile to the idea of training, do it informally. I've done such training sessions in the form of informal meetings, or sometimes even as simple discussions, sometimes interrupted by management and pursued later.

Aside from direct training, make sure you know books such as McConnell's Code Complete well enough, and talk about those books with your coworkers. Suggest that they read the source code of open source projects, and give them specific examples of high quality code. And, obviously, write high quality code yourself.

Focus on context, not on persons

How can I address this situation without just focusing on 'bad company culture', 'inexperienced graduates', etc.

Those graduates have a goal: acquire experience, learn stuff, become more skillful. If, year after year, they write crappy code and know nothing about programming, it's probably because your team or your company is not giving them this opportunity.

If you're focusing on the fact that your team has inexperienced graduates, this won't help. Instead, focus on what you can do for them and with them. Code reviews and training are two of the techniques to improve the situation.

Bad company culture is a different beast. Sometimes, it can be changed. Sometimes, it cannot. In all cases, remember that you are part of this company, so you are part of the company culture. If you can't change it and find it inherently bad, sooner or later, you'll have to leave.

Get your metrics right

How exactly do you measure code right now? Do you measure the number of commits per day per developer? Or the KLOC per month per programmer? Or maybe the code coverage? Or the number of bugs found and fixed? Or the number of potential bugs caught by regression tests? Or the number of reverts done by Continuous Deployment server?

Things you measure matter, because team members adapt their work to the factors which are measured. For instance, in one company where I had to work a few years ago, the only thing which was measured was the time one spent in the office. Needless to say, this wasn't encouraging anyone to deliver better code, or to work smarter, or... well, to work at all.

Figuring out positive and negative reinforcement and adjusting the measured factors over time is essentially the leverage you have on team members. When done properly, it makes it possible to achieve results which won't be achieved by simple hierarchy.

The things which bother you, make them measurable. Measure them, and make the results public. Then work together with other team members to improve the results.

For example, let's consider that team members make too many spelling mistakes in the names of classes, class members and variables. This is annoying. How could you measure that? With a parser, you can extract all the words from the code, and using a spell checker, determine the ratio of words containing mistakes and typos, say 16.7%.
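
A minimal sketch of how such a ratio could be computed; the word list and dictionary below are hard-coded stand-ins for what a real parser and spell checker would provide:

#include <algorithm>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Ratio of words that are not found in the dictionary.
double misspelledRatio(const std::vector<std::string>& words,
                       const std::set<std::string>& dictionary)
{
    if (words.empty())
        return 0.0;

    std::size_t misspelled = std::count_if(words.begin(), words.end(),
        [&](const std::string& word) { return dictionary.count(word) == 0; });

    return static_cast<double>(misspelled) / words.size();
}

int main()
{
    // Words extracted from identifiers by a (not shown) parser.
    std::vector<std::string> words = {""invoice"", ""custmer"", ""total"", ""amount"", ""recieve"", ""tax""};
    std::set<std::string> dictionary = {""invoice"", ""customer"", ""total"", ""amount"", ""receive"", ""tax""};

    std::cout << ""Misspelled ratio: "" << misspelledRatio(words, dictionary) << ""\n"";
}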

Your next step is to agree with your team on the target ratio. It could be 15% for the next sprint, 10% for the next one, 5% in six weeks, and 0% in two months. Those metrics are recomputed automatically on every commit, and displayed on a big screen in the office.

  • If you don't achieve the target ratio, your team may decide to spend some more time fixing spelling mistakes. Or your team may consider it better to compute the ratio per developer, and display this information on the big screen as well. Or your team may find that the goal was too optimistic, and that you should review it.

  • If you achieve the target ratio, the next step is to make sure the number of mistakes and typos won't increase over time. For that, you can create an additional task in your build which checks for spelling mistakes, and fails the build if at least one mistake is found. Now that you got rid of this problem, your big screen may be reused to show the new relevant statistics.

Conclusion

I believe that every aspect mentioned in your question can be solved through the techniques I included in my answer:

  • When other developers joined the project, I noticed that they use a different coding style (sometimes a completely different style)

    You had to enforce style automatically on commit.

  • and often don't use modern language features like property accessors (this is relatively new in Objective-C).

    Both code reviews and training are here to transfer your knowledge of the language.

  • Sometimes they would invent their own bicycles instead of using similar features of the framework

    Both code reviews and training are here to transfer your knowledge of the framework.

  • or transfer concepts from other programming languages or patterns they learned into our code base.

    This is an excellent thing. Seems like an opportunity for you to learn from them.

  • Oftentimes they can't name methods or variables properly because of bad English

    Code reviews should also focus on proper naming. Some IDEs have spell checkers too.

  • Sometimes I think if it wasn't for the IDE I think they would write all code with no indentation or formatting at all.

    Of course they would. Style is boring and should be automated.

  • Basically, I hate the code they write.

    Remember from the code reviews part: “The goal is not to show how bad a developer is. The goal is to provide opportunities for improvement.”

  • It's badly formatted/organized, and sometimes is radically different from the rest of the project.

    Automated style checking.

  • I feel very upset when they add their spaghetti to my piece of art

    Wait, what?! Piece of art?! Guess what? Some persons (including you in six months) may find your code far from being a piece of art. Meanwhile, do understand that considering your work as a piece of art and their work as crap won't help anyone. Including you.

  • It feels more and more like they can't be bothered to learn or don't care: they just do what's required from them and go home.

    Of course they will do what's required from them. Remember: context, not persons and get your metrics right. If the context requires from them to become best at what they do, they will do it. If the context requires to produce as many KLOC per month as possible and nothing more, they'll do it too.

","6605","","-1","","2017-04-13 12:45:54","2016-11-18 23:26:51","","","","4","","","","CC BY-SA 3.0" "11349","2","","9814","2010-10-12 17:42:22","","4","","

I've found it easier to talk about things on the task board/card wall/kanban board. Talk only about what is blocking the movement of cards on the wall. Take anything else offline if it is not related to the cards on the wall.

This keeps the standup focussed and relevant. Only let people in the team talk (those who are working on things that are on the wall); everyone else has to be silent till the standup is over.

Avoid technical discussions. Standups are about progress, removing blockers and communicating what everyone else is doing.

Only let one person talk at a time. Have a standup token: only the person holding the token may talk, and they may hold it for only a minute.

Be disciplined and brutal. For a team of 5-6, the standup shouldn't last longer than 10 minutes. Make it quick and snappy and full of energy.

The standup is for the team and no one else - not HR, not Management, and not even the CEO. The team decides when to have the standup and the team sets the agenda.

Have a look at common standup smells http://martinfowler.com/articles/itsNotJustStandingUp.html

","5168","","","","","2010-10-12 17:42:22","","","","1","","","","CC BY-SA 2.5" "215939","2","","215938","2013-10-24 16:52:36","","3","","

The stakeholders have their say at the end of the Sprint Review, which is their moment to voice concerns, influence the forecast for the upcoming sprints, and give feedback.

The Sprint Retrospective is for the Scrum Team. In some cases only the Development Team and the Scrum Master are present at this meeting. The Product Owner should join, though: often the relationship with the Product Owner is part of what can be improved, so inviting him/her should make that process easier. Also, any changes to the DoD (that can be made as an outcome of the Retrospective) need to be run past the Product Owner; maybe the team missed an important reason (like legislation) for things to be as they are. In the end the Product Owner is end-accountable for the product and its quality, so he should have a say.

In the Professional Scrum Developer course we also include a remark that you might get more open communication going by asking the Product Owner to leave the meeting at some point. If that is the case, the lack of transparency and trust should be addressed in a future retrospective, I'd say...

Stakeholders have no place at the retrospective. If they want changes to how things are going they will need to go through the Product Owner. If there are issues between stakeholders and the team, it might be a good idea to do a separate meeting (not a retrospective) with the whole scrum team present (incl SM and PO) to put the cards face up on the table and work out the issues.

Have you asked the manager why he thinks these stakeholders need to be included? What does he want to get out of that meeting? Figure out which meeting would be the right place for that concern to be addressed and who should be the one addressing it. There might be a need to plan something that's outside of the standard Scrum meetings, which of course is allowed.

Relevant passage from the scrum guide:

The Sprint Retrospective is an opportunity for the Scrum Team to inspect itself and create a plan for improvements to be enacted during the next Sprint.

","47730","jessehouwing","78862","","2015-11-24 23:34:32","2015-11-24 23:34:32","","","","2","","","","CC BY-SA 3.0" "11938","2","","11874","2010-10-14 09:08:16","","8","","

Make the support job fun and valuable to your developers.

I love to do support for the following reasons:

  • I talk with people all around the world. I made many friends that way. A few years ago, one of my customers invited me to his wedding! I used to have a map of the world in my office with pins that located them.
  • Support is almost the best way to get gratification for your work. When you make users happy, it really makes you happier too.
  • Complaints are a useful way to improve yourself. I take any complaint seriously, and in most cases, I can convert someone angry into a happy customer/user who will eventually spread the word around.
  • It helps me understand what customers/users need. Then I can build better software.

Those are just a few reasons.

Regarding support itself, I suggest implementing an easy-to-manage process.

When we get a support case, we do the following:

  • If it's a reproducible bug, we add it into the backlog and give its ID to the customer/user. We also take the ID of the customer/user to notify him of resolutions and releases personally. This is easy if you collect his email directly.
  • If it's a problem using the software, we take this as an opportunity to improve the documentation. Any answer is written like a knowledge base article that we add to our database afterwards. It takes triple the time to write, but we don't repeat ourselves later (most users prefer browsing the KB).
  • If it's a feature request we connect the user with the product owner directly. This is very valuable. Of course we use systems like uservoice.com, but talking with the user directly is a lot better.
  • If it's a complaint we try to manage that outside the process. People who complain like to be considered important (even if the complaint is trivial).
","","user2567","","","","2010-10-14 09:08:16","","","","5","","","","CC BY-SA 2.5" "11941","2","","11312","2010-10-14 09:20:58","","1","","

Many people have mentioned a quiet or silent workplace, which is often not only impossible but actually almost as bad as a noisy one. I can't stand utter silence, it's creepy, so here is my inexpensive suggestion:

A white/pink noise generator

Like a clock that has a white/pink noise generator in it. A lot of them have additional sounds, but the beach ones have annoying bird noises and the rivers make me have to pee, so the best are the modes that simulate rain. My favorite is rain on a tin roof.

","","Pickle Pumper","","","","2010-10-14 09:20:58","","","","0","","","2010-10-14 09:20:58","CC BY-SA 2.5" "336768","1","","","2016-11-24 14:01:45","","9","363","

In our company several teams work on different components of several projects at the same time. For example, one team might make specific kinds of software (or hardware) for some project(s), another team another specific kind of software. We use Jira projects to host issues for specific projects and Jira boards for sprints for different teams.

We face the issue of avoiding code duplication across projects, and have developed a set of core libraries which we use in those projects. While working on a project, some developer will realize that a piece of code they have written is of greater interest and should be extracted into a core library, or that some core code they are using has a bug, needs some more parametrization, or a new feature... you name it.

So they create a core library issue that goes into the core project's backlog. All these issues are reviewed, prioritized, and estimated in a core library meeting (once a week), and will be tackled according to their priority (alongside project-specific issues) in some future sprints.

Prioritization is done by sorting issues, and we put a sorted label on sorted issues (so we can search for non-sorted ones). Then we manually put one issue per core component to the top of the backlog in order for them to be tackled first. When some team puts such an issue into their sprint, they have to manually drag another item to the top of the backlog instead.

This is quite error-prone. Basically, what we have is the additional issue statuses ""sorted"" and ""estimated"" between ""open"" and ""in progress"". Reflecting this through the sorted label and their position on the board is rather cumbersome and error-prone. (For example, if someone moves an issue in some sprint up and down, this will be reflected in the core board, silently scrambling the order of issues the team might have decided about in an extensive discussion weeks earlier.)

So what would be a better way to implement this?

","1512","","1512","","2016-12-12 08:46:15","2017-01-30 13:01:17","How to model story preparation for issues which are tackled across several projects","","3","3","1","","","CC BY-SA 3.0" "12244","2","","12229","2010-10-15 14:08:16","","7","","

It is encouraged

We get 1 day a week for non-invoicable stuff such as learning, reading blogs, blogging, administration, preparing presentations for the weekly devcafés*, ...

Our boss prefers that we focus on sharing knowledge in that time.

We're actually building a dashboard for our intranet that will display the ratio ""knowledge sharing / non-invoicable time"".


* devcafés: dev team sits together 1 hour/week and 1 team member presents a new technology, methodology, ..

","2820","","2820","","2010-10-15 14:13:34","2010-10-15 14:13:34","","","","1","","","","CC BY-SA 2.5" "216308","2","","216289","2013-11-02 16:02:51","","7","","

As other have noted, this depends on whether or not msgSender can be legitimately NULL. The following assumes that it should never be NULL.

void PowerManager::SignalShutdown()
{
    // Fail fast: a missing message sender is a programming error, not a valid state.
    if (!msgSender_)
    {
        throw SignalException(""Shut down failed because message sender is not set."");
    }

    msgSender_->sendMsg(""shutdown()"");
}

The proposed ""fix"" by the others on your team violates the Dead Programs Tell No Lies principle. Bugs are really hard to find as it is. A method that silently changes its behavior based on an earlier problem, not only makes it hard to find the first bug but also adds a 2nd bug of its own.

The junior wreaked havoc by not checking for a null. What if this piece of code wreaks havoc by continuing to run in an undefined state (device is on but the program ""thinks"" it's off)? Perhaps another part of the program will do something that is only safe when the device is off.

Either of these approaches will avoid silent failures:

  1. Use asserts as suggested by this answer, but make sure they are turned on in production code (a minimal sketch follows below). This, of course, could cause problems if other asserts were written with the assumption that they would be off in production.

  2. Throw an exception if it is null.

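For completeness, here is a minimal sketch of option 1; the exact assert facility and message would depend on the code base, this only illustrates failing loudly at the point of the error:

#include <cassert>

void PowerManager::SignalShutdown()
{
    // Fails loudly right here instead of silently doing nothing.
    // NDEBUG must not be defined in production builds, or this check disappears.
    assert(msgSender_ != nullptr && ""SignalShutdown called before a message sender was set"");

    msgSender_->sendMsg(""shutdown()"");
}
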
","9830","","-1","","2017-04-12 07:31:44","2013-11-02 18:10:01","","","","0","","","","CC BY-SA 3.0" "12417","2","","12401","2010-10-16 15:31:38","","3","","

You're right, the rule applies to protocols, and not programming. If you make a typo while programming, you'll get an error as soon as you compile (or run, if you're one of those dynamic types). There's nothing to be gained by letting the computer guess for you. Unlike the common folk, we are engineers and capable of saying exactly what we mean. ;)

So, when designing an API, I would say don't follow the Robustness Principle. If the developer makes a mistake, they should find out about it right away. Of course, if your API uses data from an outside source, like a file, you should be lenient. The user of your library should find out about his/her own mistakes, but not anyone else's.
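
A minimal sketch of that distinction; retryDelayMs is a made-up library function, not from the original answer, and only shows a library rejecting a programmer mistake immediately instead of guessing:

#include <stdexcept>

// Library API: the caller is a programmer, so a bad argument is a bug.
// Fail immediately instead of silently clamping or guessing a value.
// (Data coming from files or the network would instead be handled leniently.)
int retryDelayMs(int attempt)
{
    if (attempt < 0)
        throw std::invalid_argument(""attempt must be non-negative"");

    // Simple exponential backoff, capped to keep the example short.
    int delay = 100;
    for (int i = 0; i < attempt && delay < 10000; ++i)
        delay *= 2;
    return delay;
}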

As an aside, I would guess that ""silent failure"" is allowed in the TCP protocol because otherwise, if people were throwing malformed packets at you, you would be bombarded with error messages. That's simple DoS protection right there.

","2276","","","","","2010-10-16 15:31:38","","","","1","","","","CC BY-SA 2.5" "12626","1","12665","","2010-10-18 05:13:57","","5","232","

For most open source projects, there is a well-founded project team and corporate sponsorship, and a lot of active contributors. The procedure for filing bug reports is clearly documented.

However, there are also some open source projects that have been in existence for more than 10 years (maybe 15), and were included in all sorts of free and commercial products (OSes and Linux distros, etc), and everyone just assumes they are correct, despite some parts of them being in a state of disrepair and full of bugs.

It appears to me that the real users (programmers in-the-know) simply choose to use the library in a certain way so as not to trigger the bug. Few choose to speak up.

There are also big-name companies that fix the bugs quietly (in their own products) without giving out any patches. And use that to their business advantage.

There is no leading developer. There is no information as to who are the active developers, except that you can browse the mailing list and see who has recently submitted patches, and assume that they might know someone who is helpful.

How should I handle a vulnerability case, without leaking information in a way that gives ammunition to the bad guys?

This question is a spin-off from: https://softwareengineering.stackexchange.com/questions/5168/whats-the-biggest-software-security-vulnerabilty-youve-personally-discovered

","620","","-1","","2017-04-12 07:31:41","2010-10-18 11:17:51","How do I report software vulnerabilities found in an open source library that are widely used but have a dilapidated team structure?","","4","0","","","","CC BY-SA 2.5" "13601","2","","12556","2010-10-21 17:51:32","","1","","

If this is going to be an extra-curricular activity, don't do the homework thing. That's just lame.

You could probably get something cool going by just starting up a github group and postering/emailing in your school (I guess kids these days use Facebook and Twitter too? Might be a good idea to hit those points as well). When you get a group of 5-6 people who are really interested together, decide on a project and just work at it.

If there's no interest, it's pretty ridiculously easy to join an open source project if you're reasonably skilled. Simple as forking something you're interested in at github, and starting to talk to the developers.

The advantage you have that the previous generation didn't is that it's not difficult at all to connect to programmers at your level, and in your language, while being very geographically dispersed. And I don't mean just sending email. Skype/iChat/Ventrilo make voice conferences easy, and tools like git/mercurial (and the associated project pages online) make it easy to code as a group even if you're on opposite sides of the Atlantic. There's really no reason not to code socially these days, if that's what you want to do.

Finally, don't make a habit of judging people by the languages they know/want to know. It's an easy trap to fall into when you're the only Smalltalker in a herd of people who think C++ represents the limit of programming, but it won't get you many friends, and it'll give you a bias against certain tools. I've met hackers who are miles ahead of me in skill and experience, who have used LISP, Perl and C on the same project. The people at the top of the professional developer heap tend to not care much what level their tools are as long as they do the job.

","2592","","2592","","2010-10-21 18:27:10","2010-10-21 18:27:10","","","","0","","","","CC BY-SA 2.5" "13875","2","","2192","2010-10-22 14:53:51","","9","","

Conversations of others

and noise in general

Many answers talk about context-switching and getting out of the zone, and noise, especially conversation, is one of those things that leads to those for me.

In my cubeworld, I'm surrounded by noise and conversation on all sides. One row over, the mainframe team holds constant planning meetings in the cube row. Sometimes, they'll meet with consultants in an office along the wall, and that tends to lead to loud hootin' and hollerin' and laughin' and I have to go over and ask them to close their doors.

On the other side, the web team's conference table sits just beyond my west cube wall, so I am part of every meeting, like it or not. There's also a printer on the other side of the south cube wall, and that's always good for chit-chat from people hanging out waiting for their printouts.

The immediate and obvious answer of ""Can't you just get noise-canceling headphones"" doesn't help when what you want is silence.

Sometimes for code reviews, I take my stack of papers to the lunchroom (at non-lunch times, of course), but there's a TV in there that's usually blaring. I'll turn it off if no one is watching. Otherwise, I'll go find an empty cube in another department in another part of the building.

If you want your programmers to do the work they need to do, which is predominantly thinking and pondering and considering, they need an environment where they can do it.

","4887","","","","","2010-10-22 14:53:51","","","","1","","","2011-01-05 06:12:52","CC BY-SA 2.5" "115287","2","","115282","2011-10-20 07:55:43","","10","","

So you always start development by writing the user interface? I would say that this way of working is generally associated with ""prototype"" development in professional environments, but not necessarily. In small companies writing ""small"" applications, starting with the UI happens very often (it also happens even more often that the ""small"" application you started grows and grows and becomes a monster application requiring refactoring, additional developers... but I digress).

So it is not necessarily a bad idea; it depends on the size/complexity of the application being written, and you said it yourself that you are writing these programs for yourself. Psychologically, having a UI may also help in that you can see your progress, and that gives you motivation. Whereas if you write your code-behind and test cases first, it may take a few days before you really start delving into the UI... unless your mind accepts that test cases are achievements too :)

However, as you said yourself, doing the UI first without thinking about the backbone of the application will lead you into trouble. It is very tempting to start with the ""fun"" part (I am like you, I like having neat interfaces). But before that, even for small programs, just stop for a moment, take a piece of paper and draw some diagrams of how your application works: how your components will react to UI events, and what business (or game!) rules need to be implemented. Prepare a quick to-do list of things to implement and prioritize them by what you think is critical. Also, when programming, resist the urge to implement the non-critical features/options that will inevitably come to mind; note them down and take a look at them after cooling down.

You should strive to achieve maximum separation of UI and business/game logic. Ideally, your game code should work whether you have a command-line game (for example: ""play e2e4""), a simple 2d UI, or a full-fledged 3d UI driven by the latest version of DirectX. Complete separation is not an easy job and it is where knowledge of Object-Oriented and event-driven programming will be most useful.
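
A minimal sketch of what that separation could look like, using the command-line chess example (the class and method names here are invented):

    class ChessGame:
        # Pure game logic: it knows nothing about how moves are displayed or collected.
        def __init__(self):
            self.moves = []

        def play(self, move):            # e.g. 'e2e4'
            # (real move validation and game rules would live here)
            self.moves.append(move)
            return 'played ' + move

    def command_line_ui(game):
        # One possible front end; a 2D or 3D UI would call game.play() from its
        # own event handlers instead, and the game logic would never change.
        while True:
            move = input('your move (or quit): ')
            if move == 'quit':
                break
            print(game.play(move))

    command_line_ui(ChessGame())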

Now, the good thing is that you both love programming and writing programs, and you are just a little over-enthusiastic... No scratch that. You are enthusiastic and that is a good thing.

On the subject of refactoring, you may not feel the need for it, but maybe that is because you know your code is a mess and that it would take forever to change things. However, if you organize your ideas a little before starting, you should have less code to refactor (but there will (should?) always be a little voice in your head that says ""this could work better if...""). Check out other questions on this website on the subject of refactoring; other people have covered it much better than I could.

Finally, if you feel you are not writing proper OO code take a look some books you could read in order to improve on that, for example: https://stackoverflow.com/questions/1711/what-is-the-single-most-influential-book-every-programmer-should-read

","32008","","-1","","2017-05-23 12:40:23","2011-10-21 07:45:03","","","","4","","","","CC BY-SA 3.0" "219070","2","","219069","2013-11-21 02:31:56","","1","","

Have you tried making pre-lessons that people look over individually?

Make short videos or presentations that explain the content, how the code works, or basically all that you want to teach them in a format where they need to look at it on their own and learn the information you are trying to teach them.

Then you use the team-based sessions to discuss issues related to the content. You need to distinctly identify that the team sessions are for discussing and troubleshooting issues related to the content only.

If you provide the lessons to people on an individual basis, you may be able to avoid that other social issue where a single concern becomes the voice of the group as a whole and distracts from the actual purpose of the lessons.

","109107","","","","","2013-11-21 02:31:56","","","","3","","","","CC BY-SA 3.0" "338334","2","","338328","2016-12-17 15:38:03","","4","","

Should all dev teams everywhere 'do' micro services?

Despite the benefits of the Microservices architecture, the answer is, of course, no; not everyone should.

If not, how do you decide whether micro services are appropriate for your environment?

That's hard to answer, because any architecture is the technical response to a political (strategic) question. The company's business strategy matters here, so the decision should not be a question resolved among developers alone.

I suggest reading Martin Fowler's post about Microservices, and the links it contains. They are worth the read.

Reading about Microservices' strengths, you might get the answer to your main question: What criteria should you use to decide whether to 'do micro services'?

Usually, the microservices architecture fits well in complex systems.

Complexity here is defined by different bounded contexts: business units that can operate and evolve independently from each other, yet achieve a common goal by working together.

The complexity of the system is not caused by the individual complexity of each ""bounded context""; it's caused by having to accommodate all of them in the same solution.

A bounded context can be as simple as: user management, registration, invoicing, reporting, event tracking, security (authentication and authorization), ...
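
To make that concrete, here is a deliberately tiny sketch (Python standard library only, all names invented) of one bounded context -- invoicing -- running as its own service with its own private data; user management, reporting, and so on would each be a separate process built the same way:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    INVOICES = {'INV-1': {'customer': 'ACME', 'total': 120.0}}   # this service's private data

    class InvoicingHandler(BaseHTTPRequestHandler):
        # The only way other contexts reach invoicing data is this narrow HTTP interface.
        def do_GET(self):
            invoice = INVOICES.get(self.path.strip('/'))
            body = json.dumps(invoice if invoice else {'error': 'not found'}).encode()
            self.send_response(200 if invoice else 404)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(body)

    if __name__ == '__main__':
        HTTPServer(('localhost', 8001), InvoicingHandler).serve_forever()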

That being said, it doesn't mean that small projects should not adopt this architecture. As usual, it depends on whether the gains outweigh the implicit costs, and above all on whether it responds to a real need.

I would seriously take into account the trade-offs described in the article linked above:

  • The extra baggage of managing this kind of system, which reduces productivity
  • In-depth knowledge of the domain (microservices)
  • The development complexity
  • The company capacity to manage:
    • Rapid Provisioning
    • Basic Monitoring
    • Rapid Application Deployment
    • Devops Culture

Finally, getting every stakeholder in the company involved is a must, just to ensure that everyone knows the implications.

","222996","","293672","","2018-05-14 05:35:03","2018-05-14 05:35:03","","","","0","","","","CC BY-SA 4.0" "115816","2","","115791","2011-10-23 14:33:00","","4","","

As it's defined in traditional Scrum, there isn't a problem with a Developer also functioning as a Product Owner. However, you do need to take care when planning to account for anyone who is performing their role part-time, either because they are working on multiple projects or because they have multiple roles on the same team. In your case, you can not count yourself as a full-time developer because you need to budget time in each iteration to perform the duties of the Product Owner.

I think that you also have a misunderstanding of what the Product Owner does. It is not your responsibility to choose which features go into an iteration. Instead, it's your job to be the voice of the customer on the project, when it comes to introducing new stories, assigning priorities to these new stories, and ensuring that the implementation of each story is acceptable through the creation and execution of acceptance tests. The choice of stories is based on the velocity of the team and the prioritized backlog, not by how many stories the Product Owner wants to implement.

","4","","","","","2011-10-23 14:33:00","","","","0","","","","CC BY-SA 3.0" "219979","2","","219976","2013-12-01 17:16:02","","34","","

Both describe the consistency of an application's behavior, but ""robustness"" describes an application's response to its input, while ""fault-tolerance"" describes an application's response to its environment.

An app is robust when it can work consistently with inconsistent data. For example: a maps application is robust when it can parse addresses in various formats with various misspellings and return a useful location. A music player is robust when it can continue decoding an MP3 after encountering a malformed frame. An image editor is robust when it can modify an image with embedded EXIF metadata it might not recognize -- especially if it can make changes to the image without wrecking the EXIF data.

An app is fault-tolerant when it can work consistently in an inconsistent environment. A database application is fault-tolerant when it can access an alternate shard when the primary is unavailable. A web application is fault-tolerant when it can continue handling requests from cache even when an API host is unreachable. A storage subsystem is fault-tolerant when it can return results calculated from parity when a disk member is offline.
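
A small illustrative sketch of the distinction (the URL and cache below are hypothetical): the first function is robust because bad data doesn't stop it, and the second is fault-tolerant because a bad environment -- an unreachable host -- doesn't stop it:

    import json
    import urllib.error
    import urllib.request

    def parse_records(lines):
        # Robustness: cope with inconsistent data; a malformed line is skipped
        # and decoding continues for the rest.
        records = []
        for line in lines:
            try:
                records.append(json.loads(line))
            except ValueError:
                continue
        return records

    def fetch_prices(cache):
        # Fault tolerance: cope with an inconsistent environment; fall back to
        # a cache when the (hypothetical) API host is unreachable.
        try:
            with urllib.request.urlopen('https://api.example.com/prices', timeout=2) as resp:
                data = json.loads(resp.read())
                cache['prices'] = data
                return data
        except (urllib.error.URLError, OSError):
            return cache.get('prices', {})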

In both cases, the application is expected to remain stable, behave uniformly, preserve data integrity, and deliver useful results even when an error is encountered. But when evaluating robustness, you may find criteria involving data, while when evaluating fault-tolerance, you'll find criteria involving uptime.

One doesn't necessarily lead to the other. A mobile voice-recognition app can be very robust, providing an uncanny ability to recognize speech consistently in a variety of regional accents with huge amounts of background noise. But if it's useless without a fast cellular data connection, it's not very fault-tolerant. Similarly, a web publishing application can be immensely fault-tolerant, with multiple redundancies at every level, capable of losing whole data centers without failing, but if it drops a user table and crashes the first time someone registers with an apostrophe in their last name, it's not robust at all.

If you're looking for scholarly literature to help describe the distinction, you might look in specific domains that make use of software, rather than broadly software in general. Distributed applications research might be fertile ground for fault-tolerance criteria, and Google has published some of their research that might be relevant. Data modeling research likely addresses questions of robustness, as scientists are particularly interested in the properties of robustness that yield reproducible results. You can probably find papers describing statistical applications that might be helpful, as in climate modeling, RF propagation modeling, or genome sequencing. You'll also find engineers discussing ""robust design"" in things like control systems.

The Google File System whitepaper describes their approach to fault-tolerance problems, which generally involves the assumptions that component failures are routine and so the application must adapt to them:

This project for a class at Rutgers supports a ""component-failure"" oriented definition of ""fault tolerance"":

There are loads of papers on ""robust modeling XYZ"", depending on the field you investigate. Most will describe their criteria for ""robust"" in the abstract, and you'll find it all has to do with how the model deals with input.

This brief from a NASA climate scientist describes robustness as a criteria for evaluating climate models:

This paper from an MIT researcher examines wireless protocol applications, a domain in which fault-tolerance and robustness overlap, but the authors use ""robust"" to describe applications, protocols, and algorithms, while they use ""fault-tolerance"" in reference to topology and components:

","63209","","","","","2013-12-01 17:16:02","","","","0","","","","CC BY-SA 3.0" "16144","2","","16137","2010-11-02 01:32:00","","5","","

Start making some friends with larger voices than yours.

Social networking is a great tool for this: influential people on Twitter, Facebook, Buzz, what-have-you love sharing new and interesting things their followers might enjoy. The novel link is like currency. So, think about people who are popular and have a large audience and would be interested in your work. Then, just let them know about it.

To facilitate this, you should be treating your project just like you would a startup: come up with an elevator pitch that succinctly describes what it is your project does, what problem it solves, and why someone should care. A blog or some sort of record of progress over time is also valuable, as people who are interested in following a project generally want to see how it evolves just as much as, if not more than, the project itself.

Nine times out of ten, if you're not spammy about it, if you remember you're talking to a person who is just trying to find something cool, and if your project is interesting in its own right, they're going to talk about it to others, or at least link to it.

","","user8","","","","2010-11-02 01:32:00","","","","0","","","","CC BY-SA 2.5" "16225","2","","16159","2010-11-02 12:26:13","","3","","

Testivus on Test Coverage -- From the Google Testing Blog:

Early one morning, a young programmer asked the great master:

“I am ready to write some unit tests. What code coverage should I aim for?”

The great master replied:

“Don’t worry about coverage, just write some good tests.”

The young programmer smiled, bowed, and left.

Later that day, a second programmer asked the same question.

The great master pointed at a pot of boiling water and said:

“How many grains of rice should I put in that pot?”

The programmer, looking puzzled, replied:

“How can I possibly tell you? It depends on how many people you need to feed, how hungry they are, what other food you are serving, how much rice you have available, and so on.”

“Exactly,” said the great master.

The second programmer smiled, bowed, and left.

Toward the end of the day, a third programmer came and asked the same question about code coverage.

“Eighty percent and no less!” replied the master in a stern voice, pounding his fist on the table.

The third programmer smiled, bowed, and left.

After this last reply, a young apprentice approached the great master:

“Great master, today I overheard you answer the same question about code coverage with three different answers. Why?”

The great master stood up from his chair:

“Come get some fresh tea with me and let’s talk about it.”

After they filled their cups with smoking hot green tea, the great master began:

“The first programmer is new and just getting started with testing. Right now he has a lot of code and no tests. He has a long way to go; focusing on code coverage at this time would be depressing and quite useless. He’s better off just getting used to writing and running some tests. He can worry about coverage later.

The second programmer, on the other hand, is quite experienced, both at programming and testing. When I replied by asking her how many grains of rice I should put in a pot, I helped her realize that the amount of testing necessary depends on a number of factors, and she knows those factors better than I do – it’s her code after all. There is no single, simple answer, and she’s smart enough to handle the truth and work with that.”

“I see,” said the young apprentice, “but if there is no single simple answer, then why did you tell the third programmer ‘Eighty percent and no less’?”

The great master laughed so hard and loud that his belly, evidence that he drank more than just green tea, flopped up and down.

“The third programmer wants only simple answers – even when there are no simple answers … and then does not follow them anyway.”

The young apprentice and the grizzled great master finished drinking their tea in contemplative silence.

","2329","","","","","2010-11-02 12:26:13","","","","0","","","","CC BY-SA 2.5" "16707","2","","16701","2010-11-04 02:22:51","","8","","

Time tracking is a wonderful tool for:

  • making your estimates more accurate
  • managing the size of your team
  • justifying invoices when a client takes issue with what they are being billed
  • providing more data for performance bonuses (temporal efficiency is important, but only if it comes with quality)
  • finding the drag in your workflow so that you can become more efficient over time
  • choosing the types of work at which you can be more cost-effective/efficient than your competition
  • scheduling projects more effectively

The problem is that when done wrong (which is easier than doing it right), time tracking itself can be the drag on your workflow. I have a colleague who, in a very unscientific study (I shared an office with him for two days and was curious enough to time him), spent 15% of his time documenting how he spends the other 85%!

To my mind (though I admit I'm a better technician than business strategist) that is way too much overhead for time tracking. In a small company, doing it this badly is, in my opinion, worse than not doing it at all.

","5516","","","","","2010-11-04 02:22:51","","","","2","","","","CC BY-SA 2.5" "116746","2","","116730","2008-09-26 12:54:50","","133","","

I've had similar problems as you do. The two main strategies that have helped me are

  • Only one project at any time: I've suffered from following more projects than I can count on my fingers, each ""clamouring"" for attention. Now I've radically cut down on projects either by finishing them ""once and for all"" or by simply dropping them altogether. Earlier this year I've founded a company and now I'm down to three projects: Health, Family and Company.

  • Separation of concerns: When doing everything on one desk, the risk is high to ""drift"" from one thing to another. I've removed all procrastination stuff from my work PC and use my Laptop only for ""play"" and other private internet usage (mails, userfriendly, slashdot). The PC is on my desk, the Laptop in the Living Room. This keeps a healthy distance between Company and private stuff.

Of course these two things are quite general stuff. Some of the smaller, but also helpful things:

  • No Lurking on IRC/other chat channels. Either I need or give support/community in the project I'm working on or I'm not in that channel.
  • Close The Mailer. Checking mail just because the project is compiling is stupid, since waiting for a compile is just enough time to see whether or not there is mail. If there wasn't any mail, I've interrupted my flow for nothing, and if there was mail, I'd either have to interrupt my flow even more to handle it or punt it anyway. So now, I'm checking my mail three times a day and have reduced my interruption count significantly.
  • Exercise. Often while programming I feel the urge to jump up and run around in my room. Especially when sitting before the tougher design decisions. Going biking every other day has significantly improved my ability to concentrate on stuff as well as the added benefit of improving overall stamina and well being.
  • Spent Time Bookkeeping. I've got a simple spreadsheet where I enter my Company time and some private stuff. I keep it to 15 minute chunks, which makes data entry much easier and any smaller units just cause more overhead. If I'm not doing something I can ""bill"" on the Company and it's between 8:00 and 18:00 I know I'm doing something wrong.
    Also, at the end of the week I get a nice report on how I spent my time. One big caveat here, though: when I started this after finishing university, it was a hard blow to see how little time I was spending ""productively."" It took me quite a while to recognize that I need to record everything I don't do for Family. Specifically:
    1. I need to record times spent exercising as productive. See above.
    2. I need to record times lost due to external factors: I'm travelling a lot lately and when I've only recorded 25 hours of work in a week, I suck. But if I add the two days I spent on the road that week, I see that I did more than 40 hours. Suddenly ""I suck"" changes into ""the external-factors-that-cause-my-travels suck,"" which is a much healthier thing to say.
  • Eat and Sleep Regularly. Stand up at 07:00, Breakfast, Lunch at 12:00, Dinner at 18:00, Sleep from between 22:00 and 23:00.
  • Appreciate the Small Successes. Even if I'm not yet there, today is better than yesterday and tomorrow will be better than today.
  • Adjust your Environment. That's quite a broad topic. As a home worker, I got myself a nice new desk and chair which I now use exclusively for work.
    Also I really like listening to music, but vocals -- especially in my mother tongue -- distract me incredibly. I've tried instrumental music, which worked for a while until the trance beats got to my nerves. Now I'm going for the complete silence. It might be different for you, but there's only one way to find out for real: experiment and watch yourself while working.
  • Become Accountable. Get a Conscience. I founded my Company together with an old friend, whom I deeply respect. By his presence and by knowing that our success is now interlocked, I feel compelled to give my best.

  • And finally Constant Vigilance! Distractions tend to creep up from every nook and cranny of your life (stackoverflow anybody? ;). Keeping them at bay and managing them will stay a constant struggle. Having said this, I have to close my stackoverflow tabs and get back to programming!


PS: I've talked with someone from my family who is working with ADHD kids. She told me that ADHD is a kind of catch-all/fallback diagnostic (see the ADHD Wikipedia entry for corroboration: DSM-IV V.) and is hard to diagnose ""scientifically"" since the patient has to be monitored in different settings over a longer period of time AND other causes for the symptoms have to be excluded. Practically ADHD is handled as ""the condition helped by the prescribed medicines"", since there currently are no globally accepted non-psychiatric assessment test procedures and not enough knowledge about the underlying biochemical functions. Again, quoting Wikipedia: ""There are several effective and clinically proven options to treat people with ADHD. Combined medical management and behavioral treatment is the most effective ADHD management strategy, followed by medication alone, and then behavioral treatment.""

From what I gathered from the discussion with her, the problem is that doctors often choose (cheap, symptom-oriented) medication over (expensive, cause-oriented) therapy with little regard to the long-term effects on the patient.

","","anon","","","","2009-10-27 21:41:04","","","","11","","","2011-10-28 15:48:27","CC BY-SA 2.5" "16900","2","","16869","2010-11-04 19:27:39","","2","","

First things first - ask about everything you are worried about; don't leave anything to assumptions. I cannot stress this enough - I once assumed something and later regretted it.

Secondly, and this is (IMHO) quite important too: ask them if you can walk around the company offices.

Observe the people working there. If there is something wrong, you'll ""just know it"". Part of this comes from stressed people releasing all sorts of pheromones, and the other part comes from the eerie, inexplicable silence - of the sort of a tree falling in a forest with no one there to hear it.

","5185","","","","","2010-11-04 19:27:39","","","","1","","","","CC BY-SA 2.5" "117261","2","","117243","2011-11-01 15:43:28","","6","","

First let's remember that we all generally have jobs to support the deals the sales guys make rather than to construct technically perfect programming-art. If they don't make the deals you don't have a job.

That said, the trick is to find ways to work with sales to make everyone look good. Processes where the tech team can at least voice an opinion about proposals before they go out the door are key. Finding creative ways to handle compensation also helps a lot -- if sales has to ""tip out"" engineering when engineering incurs massive overtime making an unrealistic schedule work, it seems to drastically cut down the frequency of death-march projects.

","3762","","","","","2011-11-01 15:43:28","","","","3","","","2011-11-01 23:06:21","CC BY-SA 3.0" "17441","2","","17341","2010-11-07 09:21:21","","4","","

Generally you should strive towards making the compiler silent, so that new warnings stand out more. These warnings may indicate subtle bugs and should be handled accordingly.
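
The advice is language-agnostic; as a rough Python analogue, once the existing noise has been cleaned up, the standard warnings module can promote warnings to errors so new ones cannot be silently ignored:

    import warnings

    # Turn (for example) deprecation warnings into hard errors so they are
    # impossible to overlook once the existing warnings have been fixed.
    warnings.filterwarnings('error', category=DeprecationWarning)

    warnings.warn('old_api() is deprecated', DeprecationWarning)   # now raises instead of printing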

Regarding fixing other peoples code, it strongly depends on both your workplace culture and the current state of the code. You cannot just alter code if it triggers a complete retesting cycle, like it would for code late in the test phase or in production.

Ask your boss, and act accordingly.

","","user1249","","","","2010-11-07 09:21:21","","","","0","","","","CC BY-SA 2.5" "221441","2","","221425","2013-12-15 21:53:54","","7","","

If you look at the examples in the article you cited, most of the time using Maybe doesn't shorten the code. It doesn't obviate the need to check for Nothing. The only difference is it reminds you to do so via the type system.

Note, I say ""remind,"" not force. Programmers are lazy. If a programmer is convinced a value can't possibly be Nothing, they're going to dereference the Maybe without checking it, just like they dereference a null pointer now. The end result is you convert a null pointer exception into an ""dereferenced empty maybe"" exception.

The same principle of human nature applies in other areas where programming languages try to force programmers to do something. For example, the Java designers tried to force people to handle most exceptions, which resulted in a lot of boilerplate that either silently ignores or blindly propagates exceptions.

What makes Maybe nice is when a lot of decisions are made via pattern matching and polymorphism instead of explicit checks. For example, you could create separate functions processData(Some<T>) and processData(Nothing<T>), which you can't do with null. You automatically move your error handling to a separate function, which is very desirable in functional programming, where functions are passed around and evaluated lazily rather than always being called in a top-down manner. In OOP, the preferred way to decouple your error handling code is with exceptions.
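
A small sketch of that style in Python (Some, Nothing and process_data are invented for illustration); the Nothing case is handled by its own registered function rather than an inline check:

    from dataclasses import dataclass
    from functools import singledispatch

    @dataclass
    class Some:
        value: object

    class Nothing:
        pass

    @singledispatch
    def process_data(maybe):
        raise TypeError('expected Some or Nothing, got %r' % (maybe,))

    @process_data.register
    def _(maybe: Some):
        return 'processed %r' % (maybe.value,)

    @process_data.register
    def _(maybe: Nothing):
        return 'nothing to process'   # the error-handling path lives in its own function

    print(process_data(Some(42)))     # -> processed 42
    print(process_data(Nothing()))    # -> nothing to process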

","3965","","","","","2013-12-15 21:53:54","","","","3","","","","CC BY-SA 3.0" "118106","1","118195","","2011-11-06 03:25:48","","4","1389","

I have some previous posts talking about how to use python to ""do something"" when a record is inserted into or deleted from a postgres database. I finally decided on going with a message queue to handle the ""jobs"" (beanstalkd). I have everything set up and running with another python process that watches the queue and ""does stuff"". I am not really a ""systems"" guy, so I am not sure what a good way is to monitor the process and make sure that if it fails or dies it gets restarted and a notification is sent. Google gave some good ideas, but I thought that by asking here I could get some suggestions from people who I am sure have had to do something similar.

The process is critical to the system; it just needs to always work, and if it's not working then it needs to be addressed and other parts of the system ""paused"" until the problem is fixed.

My thoughts were to just have a cron script run every minute or two that checks to see if the process is running, and restarts it if not. Another script (or maybe just another function of the first) would monitor the jobs, and if the jobs waiting to be processed hit a specific threshold, also flag that there is a major problem.
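
For what it's worth, a minimal sketch of that cron-driven watchdog idea (assuming the worker writes its PID to a file and that a local mail server is available for notifications; the paths, command and addresses are invented):

    #!/usr/bin/env python
    # Run from cron every minute or two.
    import os
    import smtplib
    import subprocess
    from email.message import EmailMessage

    PIDFILE = '/var/run/queue_worker.pid'           # assumed: the worker writes its PID here
    WORKER_CMD = ['python', '/opt/app/worker.py']   # assumed command to start the worker

    def is_running(pidfile):
        try:
            pid = int(open(pidfile).read().strip())
            os.kill(pid, 0)          # signal 0 only checks that the process exists
            return True
        except (OSError, ValueError):
            return False

    def notify(subject, body):
        msg = EmailMessage()
        msg['Subject'], msg['From'], msg['To'] = subject, 'watchdog@example.com', 'ops@example.com'
        msg.set_content(body)
        with smtplib.SMTP('localhost') as smtp:      # assumes a local MTA
            smtp.send_message(msg)

    if not is_running(PIDFILE):
        subprocess.Popen(WORKER_CMD)                 # restart the worker
        notify('queue worker restarted', 'The worker was down and has been restarted.')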

Specifics about the process: it updates the orders in a legacy system with the quantities of items that are shipped or back-ordered from our warehouse. So if these updates are not done, then when the order is invoiced it will have incorrect quantities, and the people involved wouldn't have a good way to spot this unless they check each line. I thought I might also have a flag on the order that says ""yes, I have been touched"", and if it hasn't been, just notify the invoicing agent.

This same method is going to be used for updating orders with shipping information based on when orders are shipped from UPS Worldship.

I don't know; I think I have a handle on this, but it just feels ""kludgy"".

","25221","","","","","2012-01-12 17:13:35","Monitor Process","","1","0","2","","","CC BY-SA 3.0" "221658","2","","221615","2013-12-17 16:43:42","","604","","

dynamic languages make for harder to maintain large codebases

Caveat: I have not watched the presentation.

I have been on the design committees for JavaScript (a very dynamic language), C# (a mostly static language) and Visual Basic (which is both static and dynamic), so I have a number of thoughts on this subject; too many to easily fit into an answer here.

Let me begin by saying that it is hard to maintain a large codebase, period. Big code is hard to write no matter what tools you have at your disposal. Your question does not imply that maintaining a large codebase in a statically-typed language is ""easy""; rather the question presupposes merely that it is an even harder problem to maintain a large codebase in a dynamic language than in a static language. That said, there are reasons why the effort expended in maintaining a large codebase in a dynamic language is somewhat larger than the effort expended for statically typed languages. I'll explore a few of those in this post.

But we are getting ahead of ourselves. We should clearly define what we mean by a ""dynamic"" language; by ""dynamic"" language I mean the opposite of a ""static"" language.

A ""statically-typed"" language is a language designed to facilitate automatic correctness checking by a tool that has access to only the source code, not the running state of the program. The facts that are deduced by the tool are called ""types"". The language designers produce a set of rules about what makes a program ""type safe"", and the tool seeks to prove that the program follows those rules; if it does not then it produces a type error.

A ""dynamically-typed"" language by contrast is one not designed to facilitate this kind of checking. The meaning of the data stored in any particular location can only be easily determined by inspection while the program is running.

(We could also make a distinction between dynamically scoped and lexically scoped languages, but let's not go there for the purposes of this discussion. A dynamically typed language need not be dynamically scoped and a statically typed language need not be lexically scoped, but there is often a correlation between the two.)

So now that we have our terms straight let's talk about large codebases. Large codebases tend to have some common characteristics:

  • They are too large for any one person to understand every detail.
  • They are often worked on by large teams whose personnel changes over time.
  • They are often worked on for a long time, with multiple versions.

All these characteristics present impediments to understanding the code, and therefore present impediments to correctly changing the code. In short: time is money; making correct changes to a large codebase is expensive due to the nature of these impediments to understanding.

Since budgets are finite and we want to do as much as we can with the resources we have, the maintainers of large codebases seek to lower the cost of making correct changes by mitigating these impediments. Some of the ways that large teams mitigate these impediments are:

  • Modularization: Code is factored into ""modules"" of some sort where each module has a clear responsibility. The action of the code can be documented and understood without a user having to understand its implementation details.
  • Encapsulation: Modules make a distinction between their ""public"" surface area and their ""private"" implementation details so that the latter can be improved without affecting the correctness of the program as a whole.
  • Re-use: When a problem is solved correctly once, it is solved for all time; the solution can be re-used in the creation of new solutions. Techniques such as making a library of utility functions, or making functionality in a base class that can be extended by a derived class, or architectures that encourage composition, are all techniques for code re-use. Again, the point is to lower costs.
  • Annotation: Code is annotated to describe the valid values that might go into a variable, for instance.
  • Automatic detection of errors: A team working on a large program is wise to build a device which determines early when a programming error has been made and tells you about it so that it can be fixed quickly, before the error is compounded with more errors. Techniques such as writing a test suite, or running a static analyzer fall into this category.

A statically typed language is an example of the latter; you get in the compiler itself a device which looks for type errors and informs you of them before you check the broken code change into the repository. A manifestly typed language requires that storage locations be annotated with facts about what can go into them.

So for that reason alone, dynamically typed languages make it harder to maintain a large codebase, because the work that is done by the compiler ""for free"" is now work that you must do in the form of writing test suites. If you want to annotate the meaning of your variables, you must come up with a system for doing so, and if a new team member accidentally violates it, that must be caught in code review, not by the compiler.
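
As a small illustration of that trade (the function below is invented; mypy is a real third-party static checker for Python), annotations let a tool flag the bad call without running anything, whereas without them only a test or a production failure would reveal it:

    def total_price(quantity: int, unit_price: int) -> int:
        return quantity * unit_price

    # Passing a string runs without any error -- '3' * 3 evaluates to '333' --
    # so the program silently computes nonsense. A static checker such as mypy
    # reports the argument type mismatch before the code ever runs; in a
    # dynamically typed codebase, a test suite has to catch it instead.
    order_total = total_price('3', 3)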

Now here is the key point I have been building up to: there is a strong correlation between a language being dynamically typed and a language also lacking all the other facilities that make lowering the cost of maintaining a large codebase easier, and that is the key reason why it is more difficult to maintain a large codebase in a dynamic language. And similarly there is a correlation between a language being statically typed and having facilities that make programming in the larger easier.

Let's take JavaScript for example. (I worked on the original versions of JScript at Microsoft from 1996 through 2001.) The by-design purpose of JavaScript was to make the monkey dance when you moused over it. Scripts were often a single line. We considered ten line scripts to be pretty normal, hundred line scripts to be huge, and thousand line scripts were unheard of. The language was absolutely not designed for programming in the large, and our implementation decisions, performance targets, and so on, were based on that assumption.

Since JavaScript was specifically designed for programs where one person could see the whole thing on a single page, JavaScript is not only dynamically typed, but it also lacks a great many other facilities that are commonly used when programming in the large:

  • There is no modularization system; there are no classes, interfaces, or even namespaces. These elements are in other languages to help organize large codebases.
  • The inheritance system -- prototype inheritance -- is both weak and poorly understood. It is by no means obvious how to correctly build prototypes for deep hierarchies (a captain is a kind of pirate, a pirate is a kind of person, a person is a kind of thing...) in out-of-the-box JavaScript.
  • There is no encapsulation whatsoever; every property of every object is yielded up to the for-in construct, and is modifiable at will by any part of the program.
  • There is no way to annotate any restriction on storage; any variable may hold any value.

But it's not just the lack of facilities that make programming in the large easier. There are also features that make it harder.

  • JavaScript's error management system is designed with the assumption that the script is running on a web page, that failure is likely, that the cost of failure is low, and that the user who sees the failure is the person least able to fix it: the browser user, not the code's author. Therefore as many errors as possible fail silently and the program keeps trying to muddle on through. This is a reasonable characteristic given the goals of the language, but it surely makes programming in the larger harder because it increases the difficulty of writing test cases. If nothing ever fails it is harder to write tests that detect failure!

  • Code can modify itself based on user input via facilities such as eval or adding new script blocks to the browser DOM dynamically. Any static analysis tool might not even know what code makes up the program!

  • And so on.

Clearly it is possible to overcome these impediments and build a large program in JavaScript; many multiple-million-line JavaScript programs now exist. But the large teams who build those programs use tools and have discipline to overcome the impediments that JavaScript throws in your way:

  • They write test cases for every identifier ever used in the program. In a world where misspellings are silently ignored, this is necessary. This is a cost.
  • They write code in type-checked languages and compile that to JavaScript, such as TypeScript.
  • They use frameworks that encourage programming in a style more amenable to analysis, more amenable to modularization, and less likely to produce common errors.
  • They have good discipline about naming conventions, about division of responsibilities, about what the public surface of a given object is, and so on. Again, this is a cost; those tasks would be performed by a compiler in a typical statically-typed language.

In conclusion, it is not merely the dynamic nature of typing that increases the cost of maintaining a large codebase. That alone does increase costs, but that is far from the whole story. I could design you a language that was dynamically typed but also had namespaces, modules, inheritance, libraries, private members, and so on -- in fact, C# 4 is such a language -- and such a language would be both dynamic and highly suited for programming in the large.

Rather it is also everything else that is frequently missing from a dynamic language that increases costs in a large codebase. Dynamic languages which also include facilities for good testing, for modularization, reuse, encapsulation, and so on, can indeed decrease costs when programming in the large, but many frequently-used dynamic languages do not have these facilities built in. Someone has to build them, and that adds cost.

","6505","","39006","","2014-06-27 23:44:00","2014-06-27 23:44:00","","","","39","","","","CC BY-SA 3.0" "340490","2","","340444","2017-01-19 07:06:46","","4","","

Unused variables make the intent of your code unclear. This is bad because despite appearances, code is predominantly written for people to read, not for computers.

Others have already pointed out that constructing a value and not using it confuses other people who have to read and work with your code. However, in my view the greater danger is to yourself.

An unused variable might be intentional, or it might be an oversight pointing to a defect. For instance, you might have mistyped a name and stored a value in one place when you thought you'd stored it in another. The resulting program could run fine but silently give the wrong result.
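
A tiny invented example of exactly that kind of bug, and of how the unused variable is what gives it away:

    def order_total(price, rate):
        total = price
        discounted_total = price * (1 - rate)   # typo: the author meant to assign to total
        return total                            # runs fine, silently returns the undiscounted price

    # discounted_total is assigned but never used; a data-flow or lint tool
    # (pyflakes, for example) reports exactly that anomaly, which is the hint
    # that points straight at the bug.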

Analysing data flow can help you find such errors and many others. Therefore it pays to write your code in such a way that everything a data flow analyser points out as an anomaly is, in fact, a bug. Automatic assistance in preventing bugs is invaluable; many people think ""Oh, I don't need assistance, I would never be that careless"", but so far everyone I've met who thought that was wrong.

","7422","","73508","","2017-01-19 14:51:30","2017-01-19 14:51:30","","","","0","","","","CC BY-SA 3.0" "118259","2","","118249","2011-11-07 08:39:41","","3","","

Reporting is the activity of applying transformations to stored data so that the end user can get information or satisfy business needs such as invoicing, control, planning, etc.

Some examples of basic reports:

  • In a retail system, you get a receipt after making a payment. This receipt is a type of report.

  • In a retail system, the manager may want to pull all the items that will expire on a given date.

  • In an HR system, an HR manager may want to list people whose salaries are > 100K

  • Stock items shortage report, showing items that have reached re-order level and must be ordered to satisfy expected customer demand.

  • System failure analysis reports showing time of system failure and associated messages

  • In a banking system, a monthly statement is a report.

The above reports are referred to as 'operational reports' - they are built and used to control day-to-day business operations. Another type of report exists to better control the business and make decisions about the performance of the business in general, and in particular against pre-defined KPIs. Such reports pull information from one or more systems and produce a 360-degree view for management about one or more subject areas. These reports are usually referred to as Business Intelligence Reports. For example:

  • Inventory across Regions, states, stores

  • Sales of different goods across Regions, states, stores

  • Customer consumption of goods from organization

There are still more types of reports, such as Data Mining reports; I will leave that for you to research if you want.

The report may be as simple as a list or as an aggregated format with controls and calculations. See for example

Group View Report

Reports, as you can imagine, may be consumed by customers, end users, analysts, managers, etc. The business analyst defines the suitability of each type of report for each type of user. Managers usually use high-level views of information structured in an appealing visual display, commonly referred to as dashboards.

See this for example:

Dash-1

Dash-3

Dash-2

Reports are not only text. A report can display information in both text and graphical format, such as charts or maps.

Reports can be the result of a simple query or the result of a series of integration operations (as is the case of large BI and data warehousing environments).

Tools exist to generate reports using either programming language or specific reporting tools or languages.

End-user tools such as Excel can be used for reporting. In fact Excel provides an advanced type of reporting called pivot table reports.

See: Pivot Tables in Excel
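
The same kind of cross-tab report is easy to produce in code as well; a small sketch using the pandas library with invented sales data:

    import pandas as pd

    sales = pd.DataFrame({
        'region':  ['East', 'East', 'West', 'West'],
        'product': ['Widget', 'Gadget', 'Widget', 'Gadget'],
        'amount':  [120, 80, 200, 150],
    })

    # Sum of sales amount by region and product -- the code analogue of an Excel pivot table.
    report = pd.pivot_table(sales, values='amount', index='region',
                            columns='product', aggfunc='sum')
    print(report)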

More advanced tools exist such as:

Advanced Analytics

Reporting can be performed in an on-line environment or in batch. The subject is too wide to cover fully here, but I think you get the picture.

","34148","","","","","2011-11-07 08:39:41","","","","0","","","","CC BY-SA 3.0" "340742","2","","340705","2017-01-23 14:35:17","","44","","

You should generally upgrade dependencies when:

  1. It's required
  2. There's an advantage to do so
  3. Not doing so is disadvantageous

(These are not mutually exclusive.)

Motivation 1 (""when you have to"") is the most urgent driver. Some component or platform on which you depend (e.g. Heroku) demands it, and you have to fall in line. Required upgrades often cascade out of other choices; you decide to upgrade to PostgreSQL version such-and-so. Now you have to update your drivers, your ORM version, etc.

Upgrading because you or your team perceives an advantage in doing so is softer and more optional. More of a judgment call: ""Is the new feature, ability, performance, ... worth the effort and dislocation bringing it in will cause?"" In Olden Times, there was a strong bias against optional upgrades. They were manual and hard, there weren't good ways to try them out in a sandbox or virtual environment, or to roll the update back if it didn't work out, and there weren't fast automated tests to confirm that updates hadn't ""upset the apple cart."" Nowadays the bias is toward much faster, more aggressive update cycles. Agile methods love trying things; automated installers, dependency managers, and repos make the install process fast and often almost invisible; virtual environments and ubiquitous version control make branches, forks, and rollbacks easy; and automated testing lets us try an update and then easily and substantially evaluate ""Did it work? Did it screw anything up?"" The bias has shifted wholesale, from ""if it ain't broke, don't fix it"" to the ""update early, update often"" mode of continuous integration and even continuous delivery.

Motivation 3 is the softest. User stories don't concern themselves with ""the plumbing"" and never mention ""and keep the infrastructure no more than N releases behind the current one."" The disadvantages of version drift (roughly, the technical debt associated with falling behind the curve) encroach silently, then often announce themselves via breakage. ""Sorry, that API is no longer supported!"" Even within Agile teams it can be hard to motivate incrementalism and ""staying on top of"" the freshness of components when it's not seen as pivotal to completing a given sprint or release. If no one advocates for updates, they can go untended. That wheel may not squeak until it's ready to break, or even until it has broken.

From a practical perspective, your team needs to pay more attention to the version drift problem. 2 years is too long. There is no magic. It's just a matter of ""pay me now or pay me later."" Either address the version drift problem incrementally, or suffer and then get over bigger jolts every few years. I prefer incrementalism, because some of the platform jolts are enormous. A key API or platform you depend on no longer working can really ruin your day, week, or month. I like to evaluate component freshness at least 1-2 times per year. You can schedule reviews explicitly, or let them be organically triggered by the relatively metronomic, usually annual update cycles of major components like Python, PostgreSQL, and node.js. If component updates don't trigger your team very strongly, freshness checks on major releases, at natural project plateaus, or every k releases can also work. Whatever puts attention to correcting version drift on a more regular cadence.

","55314","","55314","","2017-01-30 15:39:18","2017-01-30 15:39:18","","","","0","","","","CC BY-SA 3.0" "19287","2","","19267","2010-11-16 10:05:19","","101","","

Speaking as someone in the job (who has also been a developer), the key things I have to do are:

  • Keep the development team on track (and happy where possible) - move things out of their way that are stopping them working where possible, explain why it's not possible where they can't be moved to try and reduce any resulting stress (people are more likely to accept things if they at least understand them). Ultimately if there is a conflict between the project and the team that can't be resolved, normally the project will win. That doesn't necessarily make you popular with the team, but you're paid to deliver projects/products, not to be a union leader. The obvious skill is in minimising how often this happens.

  • Make sure that the team are communicating with the customer the right amount. This tends to be equal parts keeping the customer away from the team, and making sure the team are asking the customer about things they don't understand fully (rather than just making assumptions which may be incorrect). Developers are very big on making sure that the customer doesn't disturb them and occasionally forget that the customer might have something useful to add.

  • Project planning and prioritisation of resource conflicts, customer demands, support issues and the like. I tend to be the person who says this customer takes priority over that one, or that this bug has to be fixed before it ships but that one can go out as a known issue.

  • Manage the commercial side of development - that is, making sure that things that should be charged for are being charged for, and that we're not trying to charge for things which should be covered under support.

  • Be the voice of the team in the business and the business within the team - help everyone understand the other's position and help resolve differences where they arise. This largely tends to cover cultural conflicts between the teams needs/wants and the larger organisations, and budget matters. This is actually pretty shitty as it means when there are disagreements you're everyone's enemy.

  • Work with the team to ensure sufficient processes and tools are in place to meet the requirements of the business and customers. Make sure that these processes are being followed and adjusted as needed. Some of this is making sure the team define processes (for instance for technical things they understand better than I do), some is defining them myself (for things I understand better than they do - planning, estimating and so on). The important word here is sufficient - you don't want process for process sake but there are things that have to happen and process is the best way to achieve that consistently.

  • Ensure that every member of the team is working to at least a reasonable level, and ideally beyond that. Work with them to help resolve any issues that are preventing them reaching this level. I'd love to say that my role is making them be the best they can be but while this is true to a degree other demands (project, budget, time) mean that this will almost always be compromised to a greater or lesser extent.

  • Doing all the administration and stuff the organisation (and the law) demand

Overall it's part mentoring, part secretarial, part project management, part account management and part PR (for the team). There's a lot of picking up things the developers don't need to think about or don't think about doing, and some making sure they do things they need to do but don't want to do.

What it's not about is being the best developer (generally you're too hands off to remain current for long so you need to accept that people will know more than you - the skill is in knowing where your longer but outdated experience is more relevant than their shorter but more recent experience) or being some sort of dictator. In that respect the best way to think about it is not that you're more senior, just that you have different responsibilities. Sometimes this will involve making the final call on something (which may go against the views of the team) but more often it should be about consensus or compromise.

","5095","","1204","","2010-11-24 20:32:11","2010-11-24 20:32:11","","","","3","","","","CC BY-SA 2.5" "20239","1","20267","","2010-11-20 13:41:55","","11","306","

I guess most people have been in this situation.

The initial project planning begins. The requirements are outlined. After architectural review and sorting through APIs/Frameworks the fitting technology is picked. The development starts.

And then it starts. As soon as you need to do some supposedly simple supporting things, the framework/API starts to backfire, and instead of doing any work you end up fighting against the technology. The research time skyrockets, forums are silent, nothing seems to get done, and even when you get something to work, you're not really sure it's done right.

How do you manage in these situations? Do you go for hacks, do you research further, what do you say to management?

","8029","","","","","2010-11-21 00:33:19","How do you deal with over-the-head APIs/technology","","3","2","1","","","CC BY-SA 2.5" "119595","2","","119470","2011-11-14 06:01:03","","1","","

Actually,

it is impossible to fully distinguish between academic level programming and real world programming.

I'd say the biggest difference might be this: in real world programming - you have to know more than programming, and should be able to adapt fast.

Depending on which sector you are working in, you have to be in compliance with its laws.

Michael only touched the tip of the iceberg by stating programming related tasks, which I would classify as the easy stuff (if you are worth the dough you are being paid).

In general you'll face at least a couple of challenges per subject in an industry:

  • Governing laws (ex. client confidentiality for medical)
  • Subject know-how (ex. invoicing-tax system, inventory, resource management, medical schemes, industry standards)
  • Client requirements that are lacking, non-existent, or differ from industry standards/governing laws

If you compare a research PhD-level programming project with a real-world one, chances are they are very similar in difficulty, entry-level know-how and such.

The only real difference then is that the real world project

  • has a client
  • has budgets (time, money, people resources)

It's a different ball game when someone else makes the rules :)

","40417","","","","","2011-11-14 06:01:03","","","","0","","","2013-11-19 20:37:59","CC BY-SA 3.0" "120152","2","","120139","2011-11-16 23:26:06","","10","","

The people you work with are your best resource for understanding their business needs. Businesses get and keep a competitive advantage by doing things differently than their competitors; you won't find that information in books. Perhaps the problem is how you ask them or what you ask them.

First, many developers come across as arrogant idiots who think everyone else is stupid. This attitude tends to make people less than cooperative about helping you learn. So first things first, check your attitude and your body language and your tone of voice when talking to the users to get requirements.

Next, find the person who has the most stake in the product you are tasked with developing. Have him or her give you a PowerPoint presentation on the business needs the application will meet. Tell them that you need a better understanding of what they do in order to help them do it better. This is not in terms of the requirements, but just in terms of understanding what the jobs of the users who will be using the system are. Ask if there are regulatory requirements they have to follow. Ask to get a copy of them, or a link to them if there is too much to print out.

Ask about industry trends; subscribe to reading lists about your industry (I read the news in my industry, it helps me make suggestions for possible application changes that can keep us ahead of the competition).

If the product contains data that will need to be audited, talk to some auditors about what they need to see and more importantly why they need to see things that way. Learn something about IT auditing, it is an interesting field that will serve you well in developing business applications. Pay particular attention to the concept and practice of internal controls. These are critical in any financial application.

Spend some time observing the actual users right now as they work. Take notes. You will see many things that no one ever thinks to tell you in a meeting. Don't just talk to managers about what needs to be done. Talk to the actual users wherever possible. Invariably they will perceive the application differently than managers will. It has to work for them, though, in order for the managers to see the data they want to see. Ask them what problems they have with the current solution. Again, they will tell you things the managers would never think to tell you.

Getting back to the managers, ask about reporting needs. Business applications tend to have a data entry need and reporting need. You aren't done until both have been discussed. It does no good to put data into a database if it can't be gotten back out in the way the managers need to see it.

When they tell you something needs to happen that you don't understand, ask for further details. Ask what problem they hope to address with this change. Often you will find that they are suggesting a solution that won't completely solve their problem. In fact always ask for further details, it is a rare user who will tell you everything you need to know without extensive questioning.

Pay attention to edge cases when you talk to them. If they say something needs a manager's approval, for instance, ask what needs to happen if the manager doesn't approve.

Ask for reading material or websites that talk about the professional needs of the users. Ask for copies of any corporate regulations that affect the application, any laws or government regulations that affect the application, and any paper forms that data entry people will be entering data from. (It's amazing how much easier it is to enter data when the screen and the paper form have the fields in the same order. I was entering voter application data once, and the paper form was first name, last name while the data entry screen was last name, first name; imagine how many errors that creates.)

Talk about what information the user needs at his or her fingertips at all times to do a good job. Do the tasks need to be done in a set order or can they skip around?

Finally, sketch out prototypes and take them back to the stakeholders and discuss again. Often people can't visualize very well, help them see how the program will work before you spend any time building it. Do this on paper as a very rough sketch, so they don't think that just because you have a pretty page built that the application is finished. Users think the Interface is the whole application, if it looks finished, it is finished in their minds.

Start building a list of questions to ask that show you are thinking beyond the data entry. Ask about security of the data, how private is the data and who should have access to see or change it. Think about how the application will work over time not just what needs to be there for launch. Do you need admin pages to keep the drop down lists up-to-date as the items to pick from change?

","1093","","1093","","2011-11-17 18:06:12","2011-11-17 18:06:12","","","","0","","","","CC BY-SA 3.0" "120379","2","","120355","2011-11-17 21:12:19","","10","","

I see both sides of this argument, and I realize some rather influential voices (e.g., Fowler) advocate not returning nulls in order to keep code clean, avoid extra error-handling blocks, etc.

However, I tend to side with the proponents of returning null. I find there is an important distinction between a method responding with "I don't have any data" and responding with "I have this empty String".

Since I've seen some of the discussion referencing a Person class, consider the scenario where you attempt to look up an instance of the class. If you pass in some finder attribute (e.g., an ID), a client can immediately check for null to see if no value was found. This is not necessarily exceptional (hence not needing exceptions), but it should also be documented clearly. Yes, this requires some rigor on the part of the client, and no, I don't think that's a bad thing at all.

Now consider the alternative where you return a valid Person object... that has nothing in it. Do you put nulls in all of its values (name, address, favoriteDrink), or do you now go populate those with valid but empty objects? How is your client to now determine that no actual Person was found? Do they need to check if the name is an empty String instead of null? Isn't this sort of thing actually going to lead to as much or more code clutter and conditional statements than if we'd just checked for null and moved on?
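
To make the contrast concrete, here is a minimal sketch in Python (the repository and field names are hypothetical, and the same idea applies in Java or C#). The None variant gives the caller one unambiguous check; the "empty object" variant forces the caller to guess which field signals "not found":

class Person:
    def __init__(self, name=None, address=None, favorite_drink=None):
        self.name = name
        self.address = address
        self.favorite_drink = favorite_drink

class PersonRepository:
    def __init__(self, people):
        self._people = people  # dict mapping id -> Person

    def find_by_id(self, person_id):
        # Returning None makes "not found" explicit for the caller.
        return self._people.get(person_id)

    def find_by_id_never_null(self, person_id):
        # The alternative: never return None, hand back an "empty" Person instead.
        return self._people.get(person_id, Person())

repo = PersonRepository({1: Person(name='Ada')})

# With None, the caller has one unambiguous test:
person = repo.find_by_id(42)
if person is None:
    print('No person found')

# With an "empty" object, the caller must decide which field means "not found":
person = repo.find_by_id_never_null(42)
if person.name is None or person.name == '':
    print('No person found... probably?')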

Again, there are points on either side of this argument I could agree with, but I find this makes the most sense to the most people (making the code more maintainable).

","40834","","","","","2011-11-17 21:12:19","","","","3","","","","CC BY-SA 3.0" "342428","2","","342423","2017-02-17 03:07:59","","6","","

Unless everyone agrees that the story is unnecessary or undesirable, there is usually no need or reason to remove it from the backlog. For the specific concern you raise, I don't see how outright removing a user's story is going to make the user feel more listened to. That said, if it has been decided (ultimately by the Product Owner) that the story will not be worked on (or is deferred to a later ""version""), the story should be marked accordingly rather than constantly reevaluated. In my experience, you never reach the bottom of the backlog. It's completely normal that a story gets repeatedly deprioritized. This is part of the process of the ""team"" (in the broad sense) figuring out what is actually important. Most of the rest of this answer is really about how to deal with the ""risk"" of a user ""disengaging"".

If a user is in a position to ""track"" their story, they probably are (or at least probably should be) able to be involved (at least as a spectator) in the prioritization process. When the story was added, they should already have known its rough priority. If the user is actively involved, they presumably have other stories that are getting completed and should have little reason to ""disengage"". If the user is actively involved and all the user's stories are being deprioritized then either 1) this is a problem where features critical for a particular user are being deprioritized because they're unimportant to other users (or worse variants of this), or 2) the project isn't really aimed at that user's concerns and disengaging is probably an appropriate response.

In the (1) case, if the user isn't around to argue the case for why a story should be higher priority, get them involved. As a prelude to this or more generally, most probably in a sprint retrospective meeting but whenever, you should bring up to the team that you feel certain use-cases are being ignored which may potentially produce a system that is unusable for some users. If the user isn't very assertive, you or other team members may need to lend your voices to arguing for prioritization (of course, assuming you agree with the user). Some mild peer-to-peer ""coaching"" may also be useful if you feel equipped to do such a thing, or you can suggest it to a more appropriate person. For example, you may say something along the lines of ""we want to build a system that works for everybody but we're constrained by the stack rank, so you should speak up and help people understand the importance of user story X"". The important points to hit here would be a) explaining the process so the user understands how to work within it, and b) encouraging the user to advocate for stories that they think are important.

Things get more complicated (which is to say more political) if the user is needed for explicit (and to a much lesser but non-trivial extent implicit) buy-in. In this kind of scenario some ""horse-trading"" may be appropriate, though this is usually a bad sign. Ultimately, such a decision would come down to the Product Owner as they are the ultimate arbiter on prioritization. Since the value of software that is scrapped or just not used is quite negative, it can very much make sense to do some ""unimportant"" stories to gain buy-in. This kind of politicking usually happens at organizational levels higher than a developer, but you can definitely bring up concerns that the team may be failing to get buy-in from the users. This goes beyond Agile/Scrum. Agile methodologies try to use transparency and involvement to avoid these problems. In my experience as a developer, within larger organizations I've had to continuously and actively advocate for more involvement by end-users.

","211449","","","","","2017-02-17 03:07:59","","","","2","","","","CC BY-SA 3.0" "121086","2","","121085","2011-11-22 15:46:40","","11","","

I would recommend, whenever possible, having the Product Owner be a representative of the customer. The role of the PO is to be the voice of the customer, to ensure that what the team is producing adds business value, and write and prioritize user stories. Who better to perform these roles than a willing participant from the customer's organization? The key words are willing participant - they need to be willing and able to participate in the Scrum environment if that is the process methodology used by the team.

This isn't unique to Scrum, either. The concept of a customer representative (with an on-site representative often favored) is part of Extreme Programming, as well.

","4","","","","","2011-11-22 15:46:40","","","","2","","","","CC BY-SA 3.0" "224389","1","229057","","2014-01-16 14:18:49","","1","404","

Let me describe my background.

I am currently a software developer co-op working at an architecture firm that has a large focus on research and development, mainly in mathematical modeling and physics simulations to better aid architectural design.

I am one of the three main software developers. Realistically, we're all people that specialize in different topics, for example, one developer is an architect that's been programming and doing mathematical models since he was in high school (he's like 30 years old), and the other architect also has a Masters in data analytics. We all specialize in different things, making us part of a very multi-disciplinary group of nine people in total.

Currently I am working on a project that handles Grasshopper 3D and Rhino 3D programming, making use of the Grasshopper SDK. I really enjoy what I am doing and appreciate the learning opportunity, however, with that said, the nature of Grasshopper programming has been difficult for me.

I don't think I am that poor programming-wise, however, since most of my projects involve extending functionality beyond what the default Grasshopper SDK does, there are often times I have to spend a good couple hours or even days to understand a problem and then apply code to it. The very nature of this ""functionality beyond what the default Grasshopper SDK provides"" means that finding a solution to a problem isn't as simple as Googling, because of the relatively rare resources available, whether it's documentation or help threads online.

Getting stuck on an issue, where slow research is the only remedy, has been slowly chipping away at my confidence, and I can see that I am slowing down in shipping out components (that's what Grasshopper 3D ""tools"" are called).

I've already made a good impression in the first couple of months of my co-op, being known as efficient, quick, and comprehensive. However, in my three-month review, after I began this Grasshopper programming task, my advisors noticed that my progress has not been as quick, and they mentioned that they are wondering what it would be better to put me to work on to bring out my full potential.

I like the idea of working on another project, but I do not wish to do so until I've finished the one I am currently on. Day by day, though, I feel like I am making snail's-pace progress on it because of issues I am stuck on, where finding help is a problem in itself.

I dislike making a bad impression on my colleagues (and on my school as well, since I am a co-op employee), as I really believe I have the potential to do great work. However, this cycle of getting stuck on something, making small progress, then getting stuck again, is simply chipping away at my confidence.

What are some things I can do? I feel like once I am stuck on a problem, I find myself closing up, unable to voice it, as if asking for help is a sign of incompetency.

","104659","","31260","","2015-08-19 00:08:15","2015-08-19 00:08:15","Questions about the issue of getting stuck in a problem, and making snail's pace progress","","1","4","1","2015-08-20 01:31:08","","CC BY-SA 3.0" "121802","2","","121798","2011-11-26 06:46:33","","12","","

There are as many different reasons as there are programmers who make choices. Here are some:

  • You just need the software to get work done.

  • If you open source the software, other people will add features they need and you will likely get them for free.

  • Contributing to open source projects is fun, boosts your ego, and may improve your prospects for a job. (Your resume is verifiable, and companies may well find you.)

  • You hope to make money from donations, paid support, or getting paid to add needed features.

  • The base software that would make your job easier requires you to open source the resulting software. Making a version you could keep proprietary would take more effort.

  • You don't have the ability to support software, and if you are going to charge for it, you pretty much have to do that.

  • Money is just not a big motivator for you.

  • You want your efforts to have as great an effect as possible.

  • You have ulterior motives that will make you money. For example, Google's Android is open source, but promotes Google's search, voice, and location services which are highly profitable.

The last two jobs I've held, both with six-figure salaries, were both offered by companies that found me based on my contributions to open source or free projects.

","34200","","","","","2011-11-26 06:46:33","","","","0","","","","CC BY-SA 3.0" "23781","2","","23691","2010-12-04 22:16:31","","45","","

I'm a contractor in the UK

This is different to freelancing in that the term of the engagement with the client tends to be much longer (6 months and more), but I feel that it's worth mentioning this type of working in the context of the question since it is closely related.

I've been contracting for many years and have been fortunate to avoid any significant break in my work. Almost without exception I have worked through an agency (normally a different one for every fresh contract, but I have been able to get further business through some agencies) and I find that arrangement works well for me and them.

Minimal marketing

I don't have to market myself, other than by keeping my CV up to date and participating in various online communities (mainly Stack Exchange and LinkedIn) which I would do anyway.

Keeping current

I do have to keep myself abreast of the latest developments in my field (I specialise in Java web applications based on open source software) but, again, I'd do that anyway. Typically, I work 9am to 5:30pm Monday to Friday and that is almost continuously spent coding.

Getting paid

I don't have to worry about chasing invoices. I have been fortunate that all the agencies I have dealt with have been reputable and prompt in their payments. They in turn rely on the prompt payment from the clients who tend to be medium to large enterprises (>100 employees).

The downsides

I don't want this to turn into an advert so I'll point out some of the downsides to contracting in the UK:

  1. Taxes are higher than they first appear (but less than equivalent permanent employment)
  2. You have to use an accountant to ensure you're operating your business correctly
  3. You must be prepared to travel if you live outside of a big city (travel and hotel costs eat into your daily rate big time)
  4. You are entirely responsible for arranging your retirement (pension, lottery winnings, iPhone app etc)
  5. No holiday/sick pay (you don't turn up for any reason you don't get paid)

Why aren't there more contractors?

Talking to my fellow contractors and various permanent employees the main reasons for not jumping into contracting seem to be (in order of prevalence):

  1. The constant changing of working environment (travel, colleagues, codebase)
  2. The hassle of running a business
  3. Fear of the unknown and the perceived high risk nature of it

Recent research regarding freelancers

In accordance with the edited question, here are some recent studies that examine the question of freelancers in terms of their presence in the labour market.

Professional Contractors Guild (UK)

Do you think the number of freelancers will rise over the next 10 years? (Poll)

...Dr Bellini’s lecture touched on a number of themes, attempting to map out the freelance landscape over the coming years. He predicts that there will be a fundamental shift in working patterns, marking the death of the 9-5 and rise of the freelancer. Dr Bellini believes the size of the freelance marketplace will double over the next 10 years.

Prime Minister praises UK freelancers (includes cited figures)

He went on to state: “The 1.4 million freelancers in our country make a massive contribution to our economy. More and more people are choosing freelancing, recognising that it strikes the right balance between work and life in the 21st century, and as we go for economic growth this Government is getting right behind them.”

Analytica (Peer reviewed journal on division of labour)

Getting the message: Communications workers and global value chains (PDF) by Catherine McKercher and Vincent Mosco

They are part of the growing ranks of freelance computer programmers and other high-tech workers, who move from contract to contract and from employer to employer. Caraway notes that, in the current economic downturn, ‘flexibility, or the ability of capitalists to mobilise and demobilise labour on demand, has taken on a new significance – and a new air of inevitability’

These articles all point to a general trend of increasing numbers of freelance workers who wish, or are being forced, to change their working patterns.

","7167","","7167","","2010-12-07 20:55:08","2010-12-07 20:55:08","","","","0","","","2011-01-13 23:31:13","CC BY-SA 2.5" "122324","2","","122312","2011-11-29 15:33:20","","4","","

Although Scrum, as it's defined by Sutherland and Schwaber, doesn't object to the combination of Product Owner and development team member, it sounds like that is problematic in this particular case. The primary responsibility of the person who is designated as Product Owner is to be the voice of the customer, performing tasks such as writing user stories, prioritizing the stories, and generally managing the product backlog.

As a Scrum Master, you should not be shielding the Product Owner from the outside. It is the job of the Product Owner to interface with any clients or users of the system to create and prioritize the requirements. In fact, the point of a Product Owner (and the Scrum Master) is to shield the Development Team from stakeholders so they can focus on designing, developing, and testing the system.

It sounds like, on this particular project, the job of the Product Owner is a full-time responsibility. As such, the Product Owner should be removed from the development team. If, during a sprint, there is sufficient time for the Product Owner to contribute to design, development, or testing, that's good, as long as you track the increase in human resources so you don't skew your velocity. However, the team shouldn't be counting on the Product Owner to be a contributing member of the development team.

","4","","4","","2017-11-16 11:50:24","2017-11-16 11:50:24","","","","2","","","","CC BY-SA 3.0" "122686","2","","122638","2011-12-01 08:55:37","","6","","

You can, of course, charge for anything that you and the client agree to. Whether you should or not, however, really depends.

You state that the company you're doing the work for is ""local"". Many times, companies seek out local contractors over remote freelancers precisely because it makes it easier to physically meet at least occasionally over the course of a longer project. Many local contractors use this when pitching their services to the company. If one of your pitches has been that you're a local developer, it would be counterproductive to turn around and negate that benefit by making it difficult for the customer to meet with you face to face.

Continuing on the ""local"" front, consider exactly how far away your offices are from the client's offices, both in time and distance. If ""local"" means ""same city"", for example, charging extra for the minimal inconvenience of potentially driving across town would likely come across as petty if you're closer than most of the client's actual staff. On the other hand, if ""local"" means ""close enough to drive rather than to fly"" so that coming in for a week means a multi-hour drive, renting a hotel, etc., then it's much more reasonable to charge for that inconvenience either in the form of actual expenses or billing for travel time or upping the hourly rate.

You say that you've done work for them in the past so, presumably, there is at least a decent possibility of doing more work for them in the future. With that in mind, it may well make sense to use this as an opportunity to build some goodwill at relatively little actual cost to you. If coming to their office for a week doesn't cost you much out of pocket, just a couple extra hours of commuting time, you may be much better off in the long run telling the client that you'd normally charge X for working on-site but since they're a good customer, you'll credit the invoice for that (so that the invoice shows a charge for X and a credit for X). That sort of informal ""customer loyalty program"" is generally a great investment in ensuring that you stay in everyone's good graces without costing you much out of pocket. It certainly helps when there are future issues with bills to be able to point out that you've proactively given them discounts in the past. Plus, physically meeting people, putting names to faces, maybe grabbing a bite to eat at lunch or after work can make the working environment much better so it may be in your interest to spend the occasional week on-site with key clients.

Of course, the bigger the request actually is, the more likely it makes sense to charge for it. If you're running up a thousand dollars in hotel bills, travel expenses, and meals away from home in order to work from the client site for a week, it's perfectly reasonable to charge for that. If you're doing a fixed price bid and working from the client site is going to slow you down because you're spending time getting your equipment to work in their environment or configuring their machine with your preferred development environment, it makes sense to build in those estimates for the work. If you're charging an hourly rate, on the other hand, it makes much less sense to charge for the fact that they're asking you to be less productive for a bit.

","13597","","","","","2011-12-01 08:55:37","","","","1","","","","CC BY-SA 3.0" "225945","2","","225835","2014-01-29 15:35:46","","1","","

Most of your points are very obvious and make sense no matter what type of project you're working on. Speaking from a Sass standpoint, rather than a pure CSS standpoint, some of them make very little sense.

#5 (consider the resulting css file size)

This is a major point that most people don't quite grasp when it comes to using @extend in Sass. If your mixin is generating a lot of code that can be shared with other selectors, then @extend is a good idea. If you have lengthy selectors or the code being shared is rather short, using @extend is going to lead to a larger CSS file (see: https://codereview.stackexchange.com/questions/26003/mixin-extend-or-silent-class/27910#27910)

#6 (more than one property per line)

This is purely a style preference and has no impact on the code's performance or reusability. In my opinion, this only reduces readability of the source. The generated CSS is going to be formatted according to the chosen output style, so the format of the source is irrelevant (unless you're using SASS syntax where whitespace/indentation matters).

Some of this applies to some of your other formatting specific points, too.

#9 (Only use classes for styles)

If your goal is to be reusable, then you should never use classes. Some of the more commonly asked questions regarding libraries like Bootstrap are ""how do I use X with only semantic names"" or ""how do I get the code for X from this module without also importing the code for Y"". Instead, make good mixins and functions so that users can compose them in ways that make sense for their project.

#10 (separate set of javascript specific classes)

Form follows function. If what you've got is a toggle, then there is a difference in styling between the opened and closed states. It does not make sense to make another class just for that.

There are other properties worth considering which may make more sense (both from the JavaScript and CSS side) such as disabled, checked or the use of data-* attributes (eg. data-state=""opened"").

","110864","","-1","","2017-04-13 12:40:37","2014-01-29 15:35:46","","","","2","","","","CC BY-SA 3.0" "123059","1","","","2011-12-03 03:38:18","","9","2232","

See https://softwareengineering.stackexchange.com/questions/109817/superior-refusing-to-use-subversion

My question is similar, but here are the main differences in my scenario:

  • We are starting a new project from scratch, using PHP and web tech. There would be no down time in development as we would be adopting it from the beginning, if I have my way.

  • My dev team consists of me, and my boss. We are the ""IT"" Department of a relatively small firm.

The web app will replace a legacy application with absolutely no source control. Due to variations in geographical legal requirements, the decision was made (before I was hired) to fork the app into 7 completely separate directories for each version. Different developers did different things in different places at different times after that. Patching changes across them, well, I think it could be done better, I guess that's why I'm posting.

My boss's proposal, directly pasted from an email:

  • Updates should be submitted as packages in the SUBMISSIONS folder. The package should contain all relevant files as well as an ‘UPDATE.NFO’ file that contains a description of the update, a list of all new files included (with descriptions), and a list of all modified files with modification details.

  • Update packages should focus on an individual element and not stray from its intended purpose. Code should be designed to be modular and reusable whenever possible.

  • All submitted packages should be installed in each developer’s test environment soon after submission. Each developer is to review the new addition and voice any concerns over its installation to the production environment. A standard package update should be held for a minimum of 3 business days for this review process before being loaded into the production environment. High priority updates/fixes can skip this requirement.

The reason source control was invented is to make all that automatic, right? I suggested subversion, because that's what I used in college. Boss doesn't like subversion because ""It makes a mess of the code"" (i.e. uses binary magic and is not plainly readable). We did try it one time, but I think trying to use it on windows made weird lower/uppercase errors and we couldn't check out our files. I don't know whether it's only subversion, or all source control products that are objectionable.

So, what kind of argument should I make to my boss? Or is he right, and there could be a danger of losing all our work from some weird bug?

Or am I wrong altogether? Is source control really necessary in my situation? This is our main business-critical software we're talking about, so it will end up huge no doubt. But there's only 2 developers (now).

Additionally, If I can't convince him, would there be any point to me using it only for myself? I am speaking as someone with very limited experience actually using svn; all I really know is checkout and commit. What are the features of source control (may include other products than svn) that would aid in my individual development effort?

Please no ""get another job"" comments. That's not helpful to the debate.

","42069","","155433","","2019-02-25 15:06:19","2019-02-25 18:51:30","Boss is skeptical of using a version control system for new project, should I anyway?","","5","22","2","","","CC BY-SA 4.0" "226375","2","","226361","2014-02-01 18:39:55","","2","","

Your question is exactly the reason I believe that just because you (try to) practice Scrum or other form of Agile, it doesn't mean that there shouldn't be a technical team lead (aka benevolent dictator) on the project.

In my past experience, we had a project where management came down and simply stated ""no more management. No hierarchy. The team is responsible. Go."" We had a number of strong voices (some of which came from guys with only 1 year of experience in the project and language they were working in) on a relatively large team, and most meetings degenerated into arguments, some of which were over rather silly reasons. (Can you imagine a 1.5-hour discussion about whether the unit test project naming convention should be test_[projectname] or [projectname]_test or....)

So, in reviewing with management how that team structure just didn't work for us, my proposal was that some hierarchy is not a bad thing. For the next project, he gave me the label of design lead (with basically dictator-like powers), and in the 1.5 years that it took us to complete the next release, I think I had to exercise those powers only a few times, and only towards the beginning of the project.

My role as design lead and one of the scrum members was to be part of the team. We still had discussions, and I welcomed/encouraged input. So in reality I only had to step in a few times, when a consensus couldn't be reached. And I think because our meetings stayed productive and there was no yelling or arguing to the point where half the team just wanted to leave, over time we as a whole got more aligned on the same page, to where the necessity for those dictator-like powers mostly went away.

The other thing that I think helped was to break up the large development team into smaller teams of 3-5 people, so that those strong voices would each have a chance to naturally lead without stepping over each other.

","20673","","","","","2014-02-01 18:39:55","","","","1","","","","CC BY-SA 3.0" "25459","2","","25432","2010-12-11 04:37:12","","189","","

Did I ever tell you about Ashton?

Ashton was your classic corn-fed farm boy. His parents had been hippies who never really managed to get their acts together until his mother inherited 15 acres in a rural part of Michigan. The family moved out there, bought a couple of dairy goats, and struggled to make a living selling organic goat cheese to the yuppies at the Ann Arbor Farmer’s Market.

From the time he was ten years old, Ashton had to wake up every morning at 4:00 a.m. and milk those damn goats, and it was exhausting. Ashton loved going to school because it meant he wasn’t working knee-deep in goat poop. Throughout high school, he studied his ass off, hoping that a scholarship to a good university would be his ticket out of the farm. He found college to be so much easier than farm life that he didn’t understand why everyone else didn’t get straight A’s like him. He majored in Software Engineering because he couldn’t imagine engineers ever being required to wake up at 4:00 a.m.

Ashton graduated from school without knowing much about the software industry, really, so he went to the career fair, applied for three jobs, got accepted by all three, and picked the one that paid the most: something insane like $32,000 a year, working at a big furniture company in the southwestern part of the state that manufactured cubicle farms for corporations all over the world. He never wanted to see a farm again, so he was determined to make a good impression on his boss, Charlie Sherman.

“That’s not going to be easy,” his cubicle-mate, Jeff, said. “She’s something of a legend here.”

“What do you mean?” he asked.

“Well, you remember a few years ago, when there was all that uproar about Y2K?”

Ashton was probably too young. “Y2K?”

“You know, nobody expected that all the old computer programs written in the 1960s would still be running in 2000, so they only had room for two digits for the year. Instead of storing 1999, they would store 99. And then when the year flipped over on January 1st, 2000, the computer systems crashed, because they tried to fit “100” in two digits.

“Really? I thought that was a myth,” Ashton said.

“At every other company in the world, nothing happened,” Jeff said. “They spent billions of dollars checking every line of code. But here, of course, they’re cheap bastards, so they didn’t bother doing any testing.”

“Not at all?”

“Zilch. Zero testing. Nada. And lo and behold, when people staggered back into work on January 2nd, not a single thing worked. They couldn’t print production schedules. They couldn’t get half of the assembly lines to even turn on. And nobody knew what shifts they were supposed to be working. The factory literally came to a standstill.”

“You’re kidding,” Ashton said.

“I shit you not. The factory was totally silent. Now, Charlie, she was new then. She had been working at Microsoft, or NASA, or something... nobody could figure out why someone like her would be working in our little armpit of a company. But she sat down, and she started coding. And coding. And coding.

“Charlie coded for nine days straight. Nine days without sleeping, without eating, some people even claimed she never went to the bathroom. She went from system to system and literally fixed all of them. It was something to behold. My God, there were COBOL systems in there that needed to be fixed. The whole factory at a standstill, and Charlie is sending people to the university library in Ann Arbor to find old COBOL manuals. Assembly-line workers are standing around shivering, because even the thermostats had a Y2K bug. And Charlie is drinking cup after cup of coffee and typing like a madwoman.”

“Wow. And she never went to the bathroom?”

“Well, that part might be a little bit of an exaggeration. But she really did work 24 hours for nine days straight. Anyway, on January 11th, about five minutes before the day shift is supposed to start, she comes out of her cubicle, goes to the line printer, hits a button, and boom! out comes the production schedules, and the team schedules, and everything is perfect, perfectly formatted, using a slightly smaller font so that the “2000” fits where it used to say “99,” and she’s even written a new priority optimizing system that helps them catch up with 9 days of missed production without pissing off too many customers, and all the assembly lines start running like nothing was ever wrong, and the heat comes on, and the invoices come out printed with ‘2000’ as the year instead of ‘19100,’ and after that day, nobody found a single bug.”

“Oh come on!” Ashton says. “Nobody writes code without bugs.”

“She did. I saw it with my own eyes. The first day back they ran two days worth of cubicles without a hiccup.”

Ashton was dumbstruck. “That’s epic. How can I live up to that?”

“You can’t, buddy, nobody can,” Jeff said, turning back to his computer terminal, where he resumed an online flame war over who would win in a fight, Spock or Batman, which had been raging for over four months.

Not one to give up, Ashton swore he would, one day, do something legendary. But the truth is, there never was another Y2K. And nobody, in that part of Michigan, gave a rat’s ass about good programming. There was almost nothing for the programmers to do, in fact. Ashton got dumb little projects assigned to him... at one point he spent three weeks working on handling a case where the sales tax in one particular county was wrong because some zip code spanned two different sales tax zones. The funny thing was, it was in some unpopulated part of New York State where nobody ever bought office cubicles, and they had never had a customer there, so his code would never run.

Ever.

For two years Ashton came into work enthusiastic and excited, and dying to make a difference and do something terrific and awesome, while his coworkers surfed the Internet, sent instant messages to their friends, and played computer solitaire for hours.

Jeff, his cubicle-mate, only had one responsibility: updating the weekly Excel spreadsheet indicating how many people were hurt on the job that week. Nobody ever was. Once a week, Jeff opened the spreadsheet, went to the bottom of the page, entered the date and a zero, hit save, and that was that.

Ashton even wrote a macro for Jeff that automated that one task. Jeff didn’t want to get caught, so he refused to install it. They weren’t on speaking terms after that. It was awkward.

On the morning of his two year anniversary at the cubicle company, Ashton was driving to work when he realized something.

Not one line of code that he had written had ever run.

Not one thing he had done in two years of work made any impact on the world.

And it was fucking 24 degrees in that part of Michigan, and it was gray, and smelly, and his Honda was a piece of crap, and he didn’t have any friends in town, and nothing he did mattered.

As he drove down Lincoln Avenue, he saw the furniture company ahead on the left. Three flags fluttered in front of the corporate campus: an American flag, a flag of the great state of Michigan, and a white and red flag with the company logo. He got in the turning lane behind a long line of cars waiting to turn left. It always took four or five traffic light cycles, at rush hour, to make the turn, so Ashton had plenty of time to try to remember if any code he had ever written was ever used by anyone.

And it hadn’t. And he fought back a tear.

And instead of turning left, he went straight, almost causing an accident because he forgot that the left turn light didn’t mean you could go straight.

And he drove right down Lincoln Avenue, and got onto the Gerald Ford freeway, and he just kept driving until he got to the airport over in Grand Rapids, and he left his crappy old Honda out right in front of the terminal, knowing perfectly well it would be towed, and didn’t even close the car door, and he walked right up to the Frontier Airlines counter and he bought himself a ticket on the very next flight to San Francisco, which was leaving in 20 minutes, and he got on the plane, and he left Michigan forever.

","30","","","","","2010-12-11 04:37:12","","","","40","","","","CC BY-SA 2.5" "123608","2","","122629","2011-12-06 19:15:19","","4","","

I've been working on distributed teams for the past 6 years. While previous projects have struggled to integrate remote team members, my current project has hit a sweet spot and has had great success by following these principles...

For voice communication:

  • A high-quality (Skype) audio call and text chat is left open with the whole team connected all day. At times nothing is being said, but this facilitates the ""over-the-cube"", watercooler, random gathering, etc. discussions that frequently occur in a physical office.
  • Avoid speakerphones, cell phones, etc whenever possible. The limited bandwidth and call quality is a distraction and makes communication more difficult.
  • Any time the team needs to call someone without Skype, the entire call is joined to that person's phone
  • One-on-one or sidebar discussions take place on separate Skype calls, as needed.

For collaboration:

  • The Kanban board in an electronic project management tool, which replaces sticky notes on the wall
  • Rapid wireframing tools replace whiteboards
  • Remote desktop sharing tools are used for things like pairing, sharing, and mutual troubleshooting
  • Wikis are used for documentation
  • Daily meetings, reviews, planning meetings are all conducted via remote desktop sharing software.

By doing the above, our experience has been that there is little to no negative impact from the entire team being remote. In fact we've seen productivity gains where team members have the best of both worlds, with the ability to hit the mute button to have ""quiet hours"" for a few minutes to focus on a problem (while still keeping track of the text chat), and then rejoin the group audio when finished. You get rapid voice communication like in the office when needed, but you can also have a few quiet moments to focus when you need to.

","34605","","","","","2011-12-06 19:15:19","","","","1","","","","CC BY-SA 3.0" "123885","2","","123883","2011-12-07 23:40:03","","3","","

Broken commits are something that ""just happens""; they shouldn't mean the end of the world. I do have a little nagging voice in the back of my head that tells me one shouldn't knowingly check in broken code as a matter of principle, and that this therefore includes historical versions; however, it's not something I'd go to war over.

Git's much praised branching model makes it feasible to keep broken commits off specific branches, e.g. if your team adopts gitflow or a simplified version thereof. Anything with a ""clean master"" policy. In this case, you could check in the final, working version as a merge commit, where the (broken) historical versions are available in the repository but off the main line.

If your team hasn't adopted such a branching model, then you have a valid excuse to just push the whole lot to master and be done with it.

","7250","","","","","2011-12-07 23:40:03","","","","0","","","","CC BY-SA 3.0" "344983","2","","344980","2017-03-27 05:53:09","","2","","

In general, most sound processing works like other natural language processing in that one of the first steps is to slice your data into basic tokens, i.e. words; when processing human speech, we split the words based on the silence between them. Accordingly, you can pre-process to:

  1. Filter out sound outside of the normal, significant speech bandwidth; this is what telephone companies do to save bandwidth.
  2. Split each sample into chunks based on the gaps.

This is the equivalent of the visual deep learning systems standardising the size and bit depth of the images.

With some people who run their words into each other, the software will have some problems, but then so would most human listeners.
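
To sketch what those two pre-processing steps might look like in code, here is a rough Python example (assuming 16 kHz mono samples in a NumPy array; the band edges and silence threshold are illustrative guesses, not tuned values):

import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_speech(samples, sample_rate=16000, low_hz=300.0, high_hz=3400.0):
    # Step 1: keep roughly the telephone speech band, discard the rest.
    nyquist = sample_rate / 2.0
    b, a = butter(4, [low_hz / nyquist, high_hz / nyquist], btype='band')
    return filtfilt(b, a, samples)

def split_on_silence(samples, sample_rate=16000, frame_ms=20, threshold=0.01):
    # Step 2: slice the signal into word-like chunks wherever the energy drops.
    frame_len = int(sample_rate * frame_ms / 1000)
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    voiced = [np.sqrt(np.mean(f ** 2)) > threshold for f in frames]  # RMS energy per frame

    chunks, current = [], []
    for frame, is_voiced in zip(frames, voiced):
        if is_voiced:
            current.append(frame)
        elif current:
            chunks.append(np.concatenate(current))
            current = []
    if current:
        chunks.append(np.concatenate(current))
    return chunks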

","128719","","","","","2017-03-27 05:53:09","","","","3","","","","CC BY-SA 3.0" "124574","2","","122608","2011-12-12 09:38:10","","174","","

For context, I'm a Clang developer working at Google. At Google, we've rolled Clang's diagnostics out to (essentially) all of our C++ developers, and we treat Clang's warnings as errors as well. As both a Clang developer and one of the larger users of Clang's diagnostics I'll try to shed some light on these flags and how they can be used. Note that everything I'm describing is generically applicable to Clang, and not specific to C, C++, or Objective-C.

TL;DR Version: Please use -Wall and -Werror at a minimum on any new code you are developing. We (the compiler developers) add warnings here for good reasons: they find bugs. If you find a warning that catches bugs for you, turn it on as well. Try -Wextra for a bunch of good candidates here. If one of them is too noisy for you to use profitably, file a bug. If you write code that contains an "obvious" bug but the compiler didn't warn about it, file a bug.

Now for the long version. First some background on warning flag groupings. There are a lot of "groupings" of warnings in Clang (and to a limited extent in GCC). Some that are relevant to this discussion:

  • On-by-default: These warnings are always on unless you explicitly disable them.
  • -Wall: These are warnings that the developers have high confidence in both their value and a low false-positive rate.
  • -Wextra: These are warnings that are believed to be valuable and sound (i.e., they aren't buggy), but they may have high false-positive rates or common philosophical objections.
  • -Weverything: This is an insane group that literally enables every warning in Clang. Don't use this on your code. It is intended strictly for Clang developers or for exploring what warnings exist.

There are two primary criteria mentioned above which guide where warnings go in Clang, and let's clarify what these really mean. The first is the potential value of a particular occurrence of the warning. This is the expected benefit to the user (developer) when the warning fires and correctly identifies an issue with the code.

The second criteria is the idea of false-positive reports. These are situations where the warning fires on code, but the potential problem being cited does not in fact occur due to the context or some other constraint of the program. The code warned about is actually behaving correctly. These are especially bad when the warning was never intended to fire on that code pattern. Instead, it is a deficiency in the warning's implementation that causes it to fire there.

For Clang warnings, the value is required to be in terms of correctness, not in terms of style, taste, or coding conventions. This limits the set of warnings available, precluding oft-requested warnings such as warning whenever {}s are not used around the body of an if statement. Clang is also very intolerant of false-positives. Unlike most other compilers it will use an incredible variety of information sources to prune false positives including the exact spelling of the construct, presence or absence of extra '()', casts, or even preprocessor macros!

Now let's take some real-world example warnings from Clang, and look at how they are categorized. First, a default-on warning:

% nl x.cc
     1  class C { const int x; };

% clang -fsyntax-only x.cc
x.cc:1:7: warning: class 'C' does not declare any constructor to initialize its non-modifiable members
class C { const int x; };
      ^
x.cc:1:21: note: const member 'x' will never be initialized
class C { const int x; };
                    ^
1 warning generated.

Here no flag was required to get this warning. The rationale is that this code is never really correct, giving the warning high value, and the warning only fires on code that Clang can prove falls into this bucket, giving it a zero false-positive rate.

% nl x2.cc
     1  int f(int x_) {
     2    int x = x;
     3    return x;
     4  }

% clang -fsyntax-only -Wall x2.cc
x2.cc:2:11: warning: variable 'x' is uninitialized when used within its own initialization [-Wuninitialized]
  int x = x;
      ~   ^
1 warning generated.

Clang requires the -Wall flag for this warning. The reason is that there is a non-trivial amount of code out there which has used (for good or ill) the code pattern we are warning about to intentionally produce an uninitialized value. Philosophically, I see no point in this, but many others disagree and the reality of this difference in opinion is what drives the warning under the -Wall flag. It still has very high value and a very low false-positive rate, but on some codebases it is a non-starter.

% nl x3.cc
     1  void g(int x);
     2  void f(int arr[], unsigned int size) {
     3    for (int i = 0; i < size; ++i)
     4      g(arr[i]);
     5  }

% clang -fsyntax-only -Wextra x3.cc
x3.cc:3:21: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
  for (int i = 0; i < size; ++i)
                  ~ ^ ~~~~
1 warning generated.

This warning requires the -Wextra flag. The reason is that there are very large codebases where mis-matched sign on comparisons is extremely common. While this warning does find some bugs, the probability of the code being a bug when the user writes it is fairly low on average. The result is an extremely high false-positive rate. However, when there is a bug in a program due to the strange promotion rules, it is often extremely subtle, which gives this warning relatively high value when it does flag a bug. As a consequence, Clang provides it and exposes it under a flag.

Typically, warnings don't live long outside of the -Wextra flag. Clang tries very hard to not implement warnings which do not see regular use and testing. The additional warnings turned on by -Weverything are usually warnings under active development or with active bugs. Either they will be fixed and placed under appropriate flags, or they should be removed.

Now that we have an understanding of how these things work with Clang, let's try to get back to the original question: what warnings should you turn on for your development? The answer is, unfortunately, that it depends. Consider the following questions to help determine what warnings work best for your situation.

  • Do you have control over all of your code, or is some of it external?
  • What are your goals? Catching bugs, or writing better code?
  • What is your false-positive tolerance? Are you willing to write extra code to silence warnings on a regular basis?

First and foremost, if you don't control the code, don't try turning extra warnings on there. Be prepared to turn some off. There is a lot of bad code in the world, and you may not be able to fix all of it. That is OK. Work to find a way to focus your efforts on the code you control.

Next, figure out what you want out of your warnings. This is different for different people. Clang will try to warn without any options on egregious bugs, or code patterns for which we have long historical precedent indicating the bug rate is extremely high. By enabling -Wall you're going to get a much more aggressive set of warnings targeted at catching the most common mistakes that Clang developers have observed in C++ code. But with both of these the false-positive rate should remain quite low.

Finally, if you're perfectly willing to silence false-positives at every turn, go for -Wextra. File bugs if you notice warnings which are catching a lot of real bugs, but which have silly or pointless false positives. We're constantly working to find ways to bring more and more of the bug-finding logic present in -Wextra into -Wall where we can avoid the false-positives.

Many will find that none of these options is just-right for them. At Google, we've turned some warnings in -Wall off due to a lot of existing code that violated the warning. We've also turned some warnings on explicitly, even though they aren't enabled by -Wall, because they have a particularly high value to us. Your mileage will vary, but will likely vary in similar ways. It can often be much better to enable a few key warnings rather than all of -Wextra.

I would encourage everyone to turn on -Wall for any non-legacy code. For new code, the warnings here are almost always valuable, and really make the experience of developing code better. Conversely, I would encourage everyone to not enable flags beyond -Wextra. If you find a Clang warning that -Wextra doesn't include but which proves at all valuable to you, simply file a bug and we can likely put it under -Wextra. Whether you explicitly enable some subset of the warnings in -Wextra will depend heavily on your code, your coding style, and whether maintaining that list is easier than fixing everything uncovered by -Wextra.

Of the OP's list of warnings (which included both -Wall and -Wextra) only the following warnings are not covered by those two groups (or turned on by default). The first group emphasize why over-reliance on explicit warning flags can be bad: none of these are even implemented in Clang! They're accepted on the command line only for GCC compatibility.

  • -Wbad-function-cast
  • -Wdeclaration-after-statement
  • -Wmissing-format-attribute
  • -Wmissing-noreturn
  • -Wnested-externs
  • -Wnewline-eof
  • -Wold-style-definition
  • -Wredundant-decls
  • -Wsequence-point
  • -Wstrict-prototypes
  • -Wswitch-default

The next bucket of unnecessary warnings in the original list are ones which are redundant with others in that list:

  • -Wformat-nonliteral -- Subset of -Wformat=2
  • -Wshorten-64-to-32 -- Subset of -Wconversion
  • -Wsign-conversion -- Subset of -Wconversion

There are also a selection of warnings which are more categorically different. These deal with language dialect variants rather than with buggy or non-buggy code. With the exception of -Wwrite-strings, these all are warnings for language extensions provided by Clang. Whether Clang warns about their use depends on the prevalence of the extension. Clang aims for GCC compatibility, and so in many cases it eases that with implicit language extensions that are in wide use. -Wwrite-strings, as commented on the OP, is a compatibility flag from GCC that actually changes the program semantics. I deeply regret this flag, but we have to support it due to the legacy it has now.

  • -Wfour-char-constants
  • -Wpointer-arith
  • -Wwrite-strings

The remaining options which are actually enabling potentially interesting warnings are these:

  • -Wcast-align
  • -Wconversion
  • -Wfloat-equal
  • -Wformat=2
  • -Wimplicit-atomic-properties
  • -Wmissing-declarations
  • -Wmissing-prototypes
  • -Woverlength-strings
  • -Wshadow
  • -Wstrict-selector-match
  • -Wundeclared-selector
  • -Wunreachable-code

The reason that these aren't in -Wall or -Wextra isn't always clear. For many of these, they are actually based on GCC warnings (-Wconversion, -Wshadow, etc.) and as such Clang tries to mimic GCC's behavior. We're slowly breaking some of these down into more fine-grain and useful warnings. Those then have a higher probability of making it into one of the top-level warning groups. That said, to pick on one warning, -Wconversion is so broad that it will likely remain its own "top level" category for the foreseeable future. Some other warnings which GCC has but which have low value and high false-positive rates may be relegated to a similar no-man's-land.

Other reasons why these aren't in one of the larger buckets include simple bugs, very significant false-positive problems, and in-development warnings. I'm going to look into filing bugs for the ones I can identify. They should all eventually migrate into a proper large bucket flag or be removed from Clang.

I hope this clarifies the warning situation with Clang and provides some insight for those trying to pick a set of warnings for their use, or their company's use.

","42763","","362018","","2021-07-21 13:36:49","2021-07-21 13:36:49","","","","8","","","","CC BY-SA 4.0" "125400","2","","125376","2011-12-15 14:47:37","","5","","

It would be best if you embraced the idea of continuous improvement.

As far as team size goes, Scrum calls for a team size of no more than 11 people, with up to 9 people on the Development Team. There are some roles that you need to fill, as well. You're going to need a Scrum Master who is not part of the development team, a Product Owner who can be the voice of the customer and maintain the product backlog, and you're going to need at least one or two developers, so there's a team of 3-4 people right there. Depending on the project and teams involved, you might need some other people - graphic designers, usability experts, system administrators, quality specialists, marketing and sales - who might not be integrated with the development team.

For the number of stories per sprint, this will normalize over time. Prioritize your backlog, then take what your team considers a reasonable number of stories from the product backlog for the sprint backlog. After your first sprint, you have a velocity that can be used to compute the number of stories for future sprints. Compute your velocity using the story points finished, the sprint's duration, and your human resource utilization. As people leave or join the team, as you adjust your sprint length, or as people split their time between multiple projects, pick the number of stories appropriately. Address issues during your sprint retrospectives.
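
As a rough illustration of that computation (the formula below is my own assumption for the sketch - Scrum does not prescribe one), normalizing by sprint length and availability might look like this:

    // Sketch only: one possible velocity normalization (assumed formula, not a Scrum rule).
    // pointsFinished: story points completed in the sprint
    // sprintWeeks:    sprint duration in weeks
    // utilization:    fraction of the team's time actually available (1.0 = fully dedicated)
    double normalizedVelocity(double pointsFinished, double sprintWeeks, double utilization) {
        return pointsFinished / (sprintWeeks * utilization);
    }

For example, 30 points finished in a 3-week sprint at 75% availability gives roughly 13.3 points per fully staffed week, which you can scale by the next sprint's length and availability when picking its stories.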

The Scrum Guide says that sprints are less than 1 month in length and are measured in weeks. 2-3 weeks are typical. If that doesn't work for you, adjust. Be sure to plan your sprint accordingly, in terms of available resources over the course of the sprint, the duration, and previous velocities.

","4","","4","","2020-10-30 20:56:51","2020-10-30 20:56:51","","","","5","","","","CC BY-SA 4.0" "28377","1","28398","","2010-12-20 20:31:26","","10","620","

I recently worked on a prototype of a new system using Sketchflow, and now some key stakeholders are pushing for a sketched look & feel in the final product. To make matters worse, the people viewing the prototype were asked to provide feedback on the sketched look & feel by way of a fairly leading survey question, to which 80% of the responses were positive.

It's for an enterprise application that will mostly be for internal clients, but is also intended to be used by some external clients.

The main reasons I have for thinking it's a bad idea are:

  • It doesn't look polished or professional
  • The significant effort involved in skinning an actual application in the sketched style
  • The sketched style doesn't make efficient use of screen real-estate

I've been trying to figure out what the appeal could possibly be, and the only thing I can come up with is that people are attracted to the simplicity of it -- especially when compared with the existing system(s) it will be replacing.

Can anyone point me in the direction of evidence of why using a sketch look & feel is a bad idea? Ideally something based on UI research. I'm worried that my voice isn't going to be listened to unless I can point to something concrete.

[Edit]

I should probably add that the difficulty of skinning the application is compounded by the fact that it is intended to be delivered as one or more Silverlight webparts in a Sharepoint site. Getting a consistent sketched look & feel across both technologies could be very difficult.

[Edit]

Just in case anyone's not sure about what a Sketchflow prototype looks like (and therefore what the people in question are asking for) they're essentially asking for a production application to have a pencil-sketch wireframe look & feel.

","11086","","11086","","2010-12-20 23:07:02","2010-12-21 11:25:00","How do I convince my users not to use a sketch UI?","","7","7","1","","","CC BY-SA 2.5" "28606","2","","28582","2010-12-21 11:23:21","","1","","

We don't have specific exception handling rules for our team, except the usual ones: don't use exceptions for 'normal' behavior, don't just silently swallow exceptions, etc.
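
For instance, the ""silently swallow"" case usually looks like the following hypothetical sketch (the names are made up for illustration): the failure is hidden, and bad input becomes indistinguishable from a legitimate result.

    #include <stdexcept>
    #include <string>

    int parseQuantity(const std::string& text) {
        return std::stoi(text);   // throws std::invalid_argument / std::out_of_range on bad input
    }

    int parseQuantityQuietly(const std::string& text) {
        try {
            return parseQuantity(text);
        } catch (...) {
            return 0;   // swallowed: no logging, no rethrow - bad input now looks like a real zero
        }
    }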

Static analysis may help you catch some of those violations (depending on the language you use), but you'll be safer with code reviews.

","602","","","","","2010-12-21 11:23:21","","","","0","","","","CC BY-SA 2.5" "29084","2","","29062","2010-12-22 12:08:18","","43","","

I have come to realize that SMART goals are best used when people have a deficiency they need to correct, and are not so good for times you want people to grow or go from good to great. If someone is not doing timesheets, for example, and this is hurting the company because you sometimes have to delay invoicing, you could have a smart goal like ""over the next 6 weeks, at least 5 weeks' timesheets will be completed by 10am of the next Monday morning."" 6 weeks later you have a true or false; the developer made it or missed it. Either the new habit is in place or you get to decide if you want to still employ someone who doesn't mind delaying your invoicing. Works for people who have other bad habits too: ""over the next two weeks, at least 75% of your checkins will have a checkin comment that follows the checkin guidelines (link to internal document)."" Again you have a nice crisp did/didn't at the end of that short time.

Where I find these constructs less helpful is when the timeframe lengthens, when the achievement you want is fuzzy (learn a language, be more helpful), or when it's OK if the goal is not achieved (you may value certifications, but if someone failed their test you probably wouldn't take disciplinary action). Suddenly all the benefits of the SMART goal fall away. Stick to using them for corrective actions: then they're easy to write, they help the developer get up to the expected level, and they're easy to test for when the time's up. Having trouble writing them means they're not the right tool for this goal.

","285","","","","","2010-12-22 12:08:18","","","","4","","","","CC BY-SA 2.5" "230202","2","","229479","2014-02-25 00:39:27","","2","","

Let me ask you a completely serious counter-question: What, in your view, is the difference between ""data"" and ""code""?

When I hear the word ""data"", I think ""state"". Data is, by definition, the thing that the application itself is designed to manage, and therefore the very thing that the application can never know about at compile time. It is not possible to hard-code data, because as soon as you hard-code it, it becomes behaviour - not data.

The type of data varies by application; a commercial invoicing system may store customer and order information in a SQL database, and a vector-graphics program might store geometry data and metadata in a binary file. In both of these cases and everything in between, there is a clear and unbreakable separation between the code and data. The data belongs to the user, not the programmer, so it can never be hard-coded.

What you seem to be talking about is, to use the most technically accurate description available to my current vocabulary: information governing program behaviour which is not written in the primary programming language used to develop the majority of the application.

Even this definition, which is considerably less ambiguous than just the word ""data"", has a few problems. For example, what if significant parts of the program are each written in different languages? I have personally worked on several projects which are about 50% C# and 50% JavaScript. Is the JavaScript code ""data""? Most people would say no. What about the HTML, is that ""data""? Most people would still say no.

What about CSS? Is that data or code? If we think of code as being something that controls program behaviour, then CSS isn't really code, because it only (well, mostly) affects appearance, not behaviour. But it isn't really data, either; the user doesn't own it, the application doesn't even really own it. It's the equivalent of code for a UI designer. It's code-like, but not quite code.

I might call CSS a kind of configuration, but a more practical definition is that it is simply code in a domain-specific language. That's what your XML, YAML, and other ""formatted files"" often represent. And the reason we use a domain-specific language is that, generally speaking, it's simultaneously more concise and more expressive in its particular domain than coding the same information in a general-purpose programming language like C or C# or Java.

Do you recognize the following format?

{
    name: 'Jane Doe',
    age: 27,
    interests: ['cats', 'shoes']
}

I'm sure most people do; it's JSON. And here's the interesting thing about JSON: In JavaScript, it's clearly code, and in every other language, it's clearly formatted data. Almost every single mainstream programming language has at least one library for ""parsing"" JSON.

If we use that exact same syntax inside a function in a JavaScript file, it can't possibly be anything other than code. And yet, if we take that JSON, shove it in a .json file, and parse it in a Java application, suddenly it's ""data"". Does that really make sense?

I argue that the ""data-ness"" or ""configuration-ness"" or ""code-ness"" is inherent to what is being described, not how it's being described.

If your program needs a dictionary of 1 million words in order to, say, generate a random passphrase, do you want to code it like this:

var words = new List<string>();
words.Add(""aa"");
words.Add(""aah"");
words.Add(""ahhed"");
// snip 172836 more lines
words.Add(""zyzzyva"");
words.Add(""zyzzyvas"");

Or would you just shove all those words into a line-delimited text file and tell your program to read from it? It doesn't really matter if the word list never changes, it's not a question of whether you're hard-coding or soft-coding (which many rightly consider to be an anti-pattern when inappropriately applied), it's simply a question of what format is most efficient and makes it easiest to describe the ""stuff"", whatever the ""stuff"" is. It's fairly irrelevant whether you call it code or data; it is information that your program requires in order to run, and a flat-file format is the most convenient way to manage and maintain it.
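
For comparison, reading the flat-file version back in is trivial. Here is a minimal sketch in C++ (the snippet above is C#, but the idea is the same in any language; ""words.txt"" is just a hypothetical file name):

#include <fstream>
#include <string>
#include <vector>

// Load the line-delimited word list into memory at startup.
std::vector<std::string> loadWords(const std::string& path) {
    std::vector<std::string> words;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        if (!line.empty())
            words.push_back(line);
    }
    return words;
}

// usage: auto words = loadWords(""words.txt"");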

Assuming you follow proper practices, all of this stuff is going into source control anyway, so you might as well call it code, just code in a different and perhaps very minimalistic format. Or you can call it configuration, but the only thing that truly distinguishes code from configuration is whether or not you document it and tell end users how to change it. You could perhaps invent some bogus argument about configuration being interpreted at startup time or runtime and not at compile time, but then you'd be starting to describe several dynamically-typed languages and almost certainly anything with a scripting engine embedded inside of it (e.g. most games). Code and configuration are whatever you decide to label them as, nothing more, nothing less.

Now, there is a danger to externalizing information that isn't actually safe to modify (see the ""soft coding"" link above). If you externalize your vowel array in a configuration file, and document it as a configuration file to your end users, you are giving them an almost foolproof way to instantly break your app, for example by putting ""q"" as a vowel. But that is not a fundamental problem with ""separation of code and data"", it's simply bad design sense.

What I tell junior devs is that they should always externalize settings that they expect to change per environment. That includes things like connection strings, user names, API keys, directory paths, and so on. They might be the same on your dev box and in production, but probably not, and the sysadmins will decide how they want it to look in production, not the devs. So you need a way of having one group of settings applied on some machines, and other settings applied on other machines - ergo, external configuration files (or settings in a database, etc.)

But I stress that simply putting some ""data"" into a ""file"" isn't the same as externalizing it as configuration. Putting a dictionary of words into a text file doesn't mean that you want users (or IT) to change it, it's just a way of making it much easier for developers to understand what the hell is going on and, if necessary, make occasional changes. Likewise, putting the same information in a database table does not necessarily count as externalization of behaviour, if the table is read-only and/or DBAs are instructed never to screw with it. Configuration implies that the data is mutable, but in reality that is determined by process and responsibilities rather than the choice of format.

So, to summarize:

  • ""Code"" is not a rigidly-defined term. If you expand your definition to include domain-specific languages and anything else which affects behaviour, a lot of this apparent friction will simply disappear and it will all make sense. You can have non-compiled, DSL ""code"" in a flat file.

  • ""Data"" implies information that is owned by the user(s) or at least someone other than the developers, and not generally available at design time. It could not be hard-coded even if you wanted to do so. With the possible exception of self-modifying code, the separation between code and data is a matter of definition, not personal preference.

  • ""Soft-coding"" can be a terrible practice when over-applied, but not every instance of externalization necessarily constitutes soft-coding, and many instances of storing information in ""flat files"" is not necessarily a bona fide attempt at externalization.

  • Configuration is a special type of soft-coding that is necessary because of the knowledge that the application may need to run in different environments. Deploying a separate configuration file along with the application is far less work (and far less dangerous) than deploying a different version of the code to every environment. So some types of soft-coding are actually useful.

","3249","","","","","2014-02-25 00:39:27","","","","0","","","","CC BY-SA 3.0" "29475","2","","29109","2010-12-23 13:19:58","","170","","

I see way too many C programmers that hate C++. It took me quite some time (years) to slowly understand what is good and what is bad about it. I think the best way to phrase it is this:

Less code, no run-time overhead, more safety.

The less code we write, the better. This quickly becomes clear to all engineers who strive for excellence. You fix a bug in one place, not many - you express an algorithm once, and re-use it in many places, etc. Greeks even have a saying, traced back to the ancient Spartans: ""to say something in fewer words means that you are wise about it"". And the fact of the matter is that, when used correctly, C++ allows you to express yourself in far less code than C, without costing runtime speed, while being safer (i.e. catching more errors at compile time) than C is.

Here's a simplified example from my renderer: when interpolating pixel values across a triangle's scanline, I have to start from an X coordinate x1 and reach an X coordinate x2 (from the left to the right side of a triangle). And at each step, for each pixel I pass over, I have to interpolate values.

When I interpolate the ambient light that reaches the pixel:

  typedef struct tagPixelDataAmbient {
      int x;
      float ambientLight;
  } PixelDataAmbient;

  ...
  // inner loop
  currentPixel.ambientLight += dv;

When I interpolate the color (called ""Gouraud"" shading, where the ""red"", ""green"" and ""blue"" fields are interpolated by a step value at each pixel):

  typedef struct tagPixelDataGouraud {
      int x;
      float red;
      float green;
      float blue;  // The RGB color interpolated per pixel
  } PixelDataGouraud;

  ...
  // inner loop
  currentPixel.red += dred;
  currentPixel.green += dgreen;
  currentPixel.blue += dblue;

When I render in ""Phong"" shading, I no longer interpolate an intensity (ambientLight) or a color (red/green/blue) - I interpolate a normal vector (nx, ny, nz) and at each step, I have to re-calculate the lighting equation, based on the interpolated normal vector:

  typedef struct tagPixelDataPhong {
      int x;
      float nX;
      float nY;
      float nZ; // The normal vector interpolated per pixel
  } PixelDataPhong;

  ...
  // inner loop
  currentPixel.nX += dx;
  currentPixel.nY += dy;
  currentPixel.nZ += dz;

Now, the first instinct of C programmers would be ""heck, write three functions that interpolate the values, and call them depending on the set mode"". First of all, this means that I have a type problem - what do I work with? Are my pixels PixelDataAmbient? PixelDataGouraud? PixelDataPhong? Oh, wait, the efficient C programmer says, use a union!

  typedef union tagSuperPixel {
      PixelDataAmbient a;
      PixelDataGouraud g;
      PixelDataPhong   p;
  } SuperPixel;

..and then, you have a function...

  RasterizeTriangleScanline(
      enum mode, // { ambient, gouraud, phong }
      SuperPixel left,
      SuperPixel right)
  {
      int i,j;
      if (mode == ambient) {
          // handle pixels as ambient...
          int steps = right.a.x - left.a.x;
          float dv = (right.a.ambientLight - left.a.ambientLight)/steps;
          float currentIntensity = left.a.ambientLight;
          for (i=left.a.x; i<right.a.x; i++) {
              WorkOnPixelAmbient(i, dv);
              currentIntensity+=dv;
          }
      } else if (mode == gouraud) {
          // handle pixels as gouraud...
          int steps = right.g.x - left.g.x;
          float dred = (right.g.red - left.g.red)/steps;
          float dgreen = (right.g.green - left.a.green)/steps;
          float dblue = (right.g.blue - left.g.blue)/steps;
          float currentRed = left.g.red;
          float currentGreen = left.g.green;
          float currentBlue = left.g.blue;
          for (j=left.g.x; i<right.g.x; j++) {
              WorkOnPixelGouraud(j, currentRed, currentBlue, currentGreen);
              currentRed+=dred;
              currentGreen+=dgreen;
              currentBlue+=dblue;
          }
...

Do you feel the chaos slipping in?

First of all, one typo is all that is needed to crash my code, since the compiler will never stop me in the ""Gouraud"" section of the function, to actually access the "".a."" (ambient) values. A bug not caught by the C type system (that is, during compilation), means a bug that manifests at run-time, and will require debugging. Did you notice that I am accessing left.a.green in the calculation of ""dgreen""? The compiler surely didn't tell you so.

Then, there is repetition everywhere - the for loop is repeated as many times as there are rendering modes, and we keep doing ""right minus left divided by steps"". Ugly, and error-prone. Did you notice I compare using ""i"" in the Gouraud loop, when I should have used ""j""? The compiler is, again, silent.

What about the if/else ladder for the modes? What if I add a new rendering mode in three weeks? Will I remember to handle the new mode in all the ""if mode=="" checks in all my code?

Now compare the above ugliness, with this set of C++ structs and a template function:

  struct CommonPixelData {
      int x;
  };
  struct AmbientPixelData : CommonPixelData {
      float ambientLight;
  };
  struct GouraudPixelData : CommonPixelData {
      float red;
      float green;
      float blue;  // The RGB color interpolated per pixel
  };
  struct PhongPixelData : CommonPixelData {
      float nX;
      float nY;
      float nZ; // The normal vector interpolated per pixel
  };

  template <class PixelData>
  void RasterizeTriangleScanline(
      PixelData left,
      PixelData right)
  {
      PixelData interpolated = left;
      PixelData step = right;
      step -= left;
      step /= int(right.x - left.x); // divide by pixel span
      for(int i=left.x; i<right.x; i++) {
          WorkOnPixel<PixelData>(interpolated);
          interpolated += step;
      }
  }

Now look at this. We no longer make a union type-soup: we have specific types per each mode. They re-use their common stuff (the ""x"" field) by inheriting from a base class (CommonPixelData). And the template makes the compiler CREATE (that is, code-generate) the three different functions we would have written ourselves in C, but at the same time, being very strict about the types!

Our loop in the template cannot goof and access invalid fields - the compiler will bark if we do.

The template performs the common work (the loop, increasing by ""step"" each time), and can do so in a manner that simply CAN'T cause runtime errors. The interpolation per type (AmbientPixelData, GouraudPixelData, PhongPixelData) is done with the operator+=() that we will add to the structs - which basically dictates how each type is interpolated.
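
To make that last point concrete, here is the GouraudPixelData struct from above extended with a sketch of that operator (my illustration, not code from the original answer; AmbientPixelData and PhongPixelData would get analogous operators, and operator-=() and operator/=() follow the same pattern so the subtraction and division in the template compile too):

  struct GouraudPixelData : CommonPixelData {
      float red;
      float green;
      float blue;

      // Per-type interpolation step used by the template's ""interpolated += step"".
      GouraudPixelData& operator+=(const GouraudPixelData& rhs) {
          red   += rhs.red;
          green += rhs.green;
          blue  += rhs.blue;
          return *this;
      }
  };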

And do you see what we did with WorkOnPixel<T>? We want to do different work per type? We simply call a template specialization:

template <> void WorkOnPixel<AmbientPixelData>(AmbientPixelData& p)
{
    // use the p.ambientLight field
}


template <> void WorkOnPixel<GouraudPixelData>(GouraudPixelData& p)
{
    // use the p.red/green/blue fields
}

That is - the function to call, is decided based on the type. At compile-time!

To rephrase it again:

  1. we minimize the code (via the template), re-using common parts,
  2. we don't use ugly hacks, we keep a strict type system, so that the compiler can check us at all times.
  3. and best of all: none of what we did has ANY runtime impact. This code will run JUST as fast as the equivalent C code - in fact, if the C code was using function pointers to call the various WorkOnPixel versions, the C++ code will be FASTER than C, because the compiler will inline the type-specific WorkOnPixel template specialization call!

Less code, no run-time overhead, more safety.

Does this mean that C++ is the be-all and end-all of languages? Of course not. You still have to measure trade-offs. Ignorant people will use C++ when they should have written a Bash/Perl/Python script. Trigger-happy C++ newbies will create deep nested classes with virtual multiple inheritance before you can stop them and send them packing. They will use complex Boost meta-programming before realizing that this is not necessary. They will STILL use char*, strcmp and macros, instead of std::string and templates.

But this says nothing more than... watch who you work with. There is no language to shield you from incompetent users (no, not even Java).

Keep studying and using C++ - just don't overdesign.

","11481","","11481","","2016-11-06 17:27:38","2016-11-06 17:27:38","","","","8","","","2011-07-25 12:44:58","CC BY-SA 3.0" "126536","2","","116531","2011-12-22 13:18:01","","3","","

I recently asked a question about tests in game development - this is, BTW, how I found this one. The answers there pointed out some curious, specific disadvantages:

  1. It is costly to do when your code should be highly coupled.
  2. It is difficult to do when you have to be aware of various hardware platforms, when you have to analyze the output presented to the user, and when the code's result only makes sense in a broader context.
  3. UI and UX testing is very hard.
  4. And notably, automated tests can be more expensive and less effective than a bunch of very low-cost (or free) beta testers.

The 4th point reminds me of some experience of mine. I worked at a very lean, XP-oriented, Scrum-managed company where unit tests were highly recommended. However, in its path to a leaner, less bureaucratic style, the company just neglected the construction of a QA team - we had no testers. So customers frequently found trivial bugs in some systems, even with test coverage of >95%. So I would add another point:

  • Automated tests may make you feel that QA and testing are not important.

Also, I was thinking those days about documentation and came up with a hypothesis that may be valid (to a lesser extent) for tests too. I just felt that code evolves so quickly that it is pretty hard for documentation to follow such a velocity, so it is more valuable to spend time making code readable than writing heavy, easily outdated documentation. (Of course, this does not apply to APIs, but only to internal implementation.) Tests suffer a bit from the same problem: they may be too slow to write compared with the tested code. OTOH, it is a lesser problem because tests warn you when they are outdated, while your documentation will stay silent as long as you do not reread it very, very carefully.

Finally, a problem I find sometimes: automated testing may depend upon tools, and those tools may be poorly written. I started a project using XUL some time ago and, man, it is just painful to write unit tests for such a platform. I started another application using Objective-C, Cocoa and Xcode 3, and the testing model on it was basically a bunch of workarounds.

I have other experiences with the disadvantages of automated testing, but most of them are listed in other answers. Nonetheless, I am a vehement advocate of automated testing. It has saved an awful lot of work and headache, and I always recommend it by default. I judge those disadvantages to be mere details when compared to the benefits of automated testing. (It is important to always proclaim your faith after you voice heresies, to avoid the auto-da-fé.)

","27229","","-1","","2017-04-13 12:18:42","2011-12-22 13:18:01","","","","0","","","","CC BY-SA 3.0" "231071","2","","231031","2014-03-03 13:30:22","","2","","

In two years of doing scrum, we've never failed to reach consensus. Measuring consensus is simple: just ask if there are any open concerns about the plan. Watch for people who don't speak up, but are shaking their head. Don't end the meeting until everyone is on board.

The great thing about scrum is your mistakes don't last very long. The worst I've ever had to do to gain consensus is say something like, ""You may be right, but the rest of the team disagrees. Are you willing to try it for two weeks and reevaluate in our next retrospective?""

","3965","","","","","2014-03-03 13:30:22","","","","0","","","","CC BY-SA 3.0" "231156","2","","230905","2014-03-04 05:31:06","","1","","

We faced the same thing as well. Basically, we committed to something like 20 points, but by the last week or even the middle of the sprint we ran out of coding tasks. Because of testing and the rest of the process we didn't risk picking up another PBI, so what the programmers did was look into the backlog and start developing future PBIs (silently!), then inform the rest of the team at planning that the PBI was ready for code review and testing - just like you said.

It actually raised some concern from our POs that we seemed capable of more but weren't fully utilizing the team's potential, which was partly true: maybe our programmers could do more, but our testers couldn't keep up with that speed, so there was a risk of failing the sprint. After thinking about this issue, we found that we needed to change our view of Scrum. The main issue was that people didn't want to take that risk because we committed to PBIs, so the team didn't feel good about picking up new PBIs whenever a programmer was free.

Simply put, we started to forecast PBIs rather than make a commitment. Forecasting basically means we pick 25 points at planning at the start of the sprint, and when a programmer gets free in the middle of the sprint, because there are no more coding tasks, he or she picks one of the future PBIs, puts it in the current sprint, and starts working on it. If the PBI can pass the whole process (testing, merging, etc.) within the same sprint, it is a bonus point for the team; if not, we don't fail the sprint because of that PBI - we just carry the remaining work (testing, merging, etc.) forward to the next sprint and re-estimate the remaining job. So in the worst case it is done across two different sprints. I know it might sound like ScrumBut, but it actually improved the way we work. I can summarize its benefits as below:

  • It defeats the phobia of failing the sprint because of taking the risk of picking up more PBIs
  • It makes the extra work of your programmers and team visible
  • It increases your team's velocity

However, for a team with less experience, it may reduce the push that commitment gives the team to finish the PBIs.

","73926","","","","","2014-03-04 05:31:06","","","","0","","","","CC BY-SA 3.0" "31020","2","","30985","2010-12-28 15:48:22","","14","","

To be able to follow the basic rules, they need to know what the rules are and they need to agree with them.

The way to handle this is to jointly create a coding guideline document which everyone can agree with. Don't try forcing this on them; it will backfire if you do.

So get the team together and start working on a common definition of your basic rules!

Do it as a workshop where all voices are heard. Timebox it to avoid endless discussions. You are trying to bring several minds together, so you may want to set the stage with a positive note that you all should be respectful and keep an open mind (code writing is personal...).

These guidelines should be a living document, changed whenever the team feels there is something that should be added or clarified.

","5692","","","","","2010-12-28 15:48:22","","","","1","","","","CC BY-SA 2.5" "128390","2","","128389","2012-01-04 20:53:26","","7","","

It depends on your general philosophy around errors and error handling. I am the ""hard error"" type of guy: I will throw an exception at the slightest hint that something might be wrong; I will assert everything. If there is an error, if something was expected to be there and it is not, or if something is there and it shouldn't be, the entire universe must stop. The Windows exclamation sound must ring ominously through the loudspeakers.

There are other people who would rather not be bothered with errors. So what if we ship it to the client and the entire reports module is missing because we forgot to code it, and nobody in testing realized it because the application was all too silent about it? Better to do nothing than to throw an exception in the client's face!

","41811","","41811","","2012-01-04 20:58:41","2012-01-04 20:58:41","","","","2","","","","CC BY-SA 3.0" "128827","1","128830","","2012-01-06 15:41:52","","2","247","

According to this article from the BBC and this post on Microsoft's Exploring IE Blog, Microsoft has planned to automatically upgrade Windows XP, Windows Vista, and Windows 7's IE to the latest version.

Microsoft has stated that they will also allow people to opt out of this update through the use of the Internet Explorer 8 and Internet Explorer 9 Automatic Update Blocker toolkits, and will respect any previously declined automatic updates. I assume corporate IT administrators who manage IE can also block whatever they want.

Microsoft's blog post states that the release will begin in Australia and Brazil to be ""scaled up as time goes on.""

I'm wondering, from the perspective of a web designer, how rapidly these upgrades are likely to be propagated to average end users, but also especially those in tightly-regulated corporate IT infrastructures. Given best (or common ;) practices in the corporate world, as well as average end-user behaviour, how soon can I stop investing a significant amount of time in backwards-compatibility with IE6 and 7?

","9492","","","","","2012-01-06 17:28:42","How quickly is the silent IE upgrade likely to propagate to end users? What about managed corporate IT infrastructures?","","2","4","","2017-12-06 11:47:36","","CC BY-SA 3.0" "232520","2","","232507","2014-03-15 21:27:35","","4","","

Having been in your situation on a few occasions, I've found that UML diagrams and the like are not a good way to go. They tend to be overly complex, and much more difficult to communicate about - now your audience not only has to understand what you said in English, they also have to be familiar with UML diagrams and know how to apply them to the work they're doing.

The goal is not to come up with what you consider to be cleaner/more concise/more accurate ways to write down your instructions, but to make sure that your instructions are easier to understand and follow for a non-English-speaking audience that is the product of a different educational system. With that in mind, UML diagrams will only help if your target audience is capable of understanding them better than spoken/written English.

The following approaches/ideas have worked for me:

  • You're stuck being a project manager. You might have to accept this means an entirely different set of skills, work, etc... that you'll find yourself using - a different job description from being a software developer. You might also have to accept (and explain to any overlords you may be reporting to) that this is a time consuming job - you can avoid having to rewrite their code, but there's other work you'll have to do instead, which will impact your job as a software developer

  • Make sure that you're regularly speaking with the people actually writing your code - try to eliminate the intermediary level(s) of management.

This might mean going for a more involved micro-management approach (say, daily agile-style standups, where you do video chats with the dev teams in groups of less than 7, and have them each report on what they've done today, what they're stuck on, what they'll do tomorrow). Or it might mean trying to get the lead developers to show up to your weekly meetings, and getting them involved in the discussion about what code they've written/they're about to write. Maybe you can put them on your chat contact list, and, when you code-review their work, ask them directly about code they've written; cultivate a friendly relationship.

The goal is to open up communication channels more - so that the people who are actually writing the code understand that (a) you're holding them to a higher standard, (b) you actually care about them, and the work that they're doing and (c) making yourself available more easily/quickly for any questions they have when they interpret your instructions.

  • Learn to speak ""their"" version of English. More frequent communication should help with this - expose yourself to as many different people on the other side as you can, all speaking about things that you're intimately familiar with, and try to pay attention to/pick up idioms, expressions, etc... that they are using - and then use them when describing your requirements.

  • Add unit testing to your list of requirements. Have them write a unit test framework, and write unit tests for each story - it's often easier to send over an extra unit test and ask that it be green, than to explain what you want; maybe not everyone involved speaks english, but everyone involved should, theoretically, speak code.

  • Make sure that you include UAT with each story description, not just requirements. Use a standard pattern - ""As a {user description} I want to do {some action} which results in {specific description of result}"". Ask them to provide you with tests that show the code actually does what you asked it to do - again, it may be faster/easier to modify tests and send them back than to re-explain yourself

  • Most outsourcing outfits are extrinsically motivated; they're in it for the money. For development work, especially if it requires any creativity, that's a major downfall. Either split the work up so that anything you send to them is the kind of work that's easy for people who are only in it for the money to do (work that doesn't take creative thinking is ideal), or get them to be more intrinsically motivated.

For that last bit - the low hanging fruit include switching to video chat for all your communication; learning about things like holidays, birthdays, etc... and mentioning them on calls; making it a point to highlight particularly well done work individually, and with a bit of delay - and highlighting bugs/issues as soon as they're discovered, and with a can-do-team-approach tone of voice (""here's a bug - can we get this fixed?"" vs. ""who wrote this? sam? dude, your code here was awesome!""), etc... If you do manage to get to know the devs, it should help - find out what they each like about the work you're sending, and see if you can help them to do the bits they like.

","4091","","","","","2014-03-15 21:27:35","","","","0","","","","CC BY-SA 3.0" "232816","2","","232812","2014-03-18 16:50:33","","8","","

In my experience, the argument against ""too many class files"" is always perpetuated by people who simply aren't disciplined enough to separate responsibilities well. Your experience of finding 4500+-line code files is exactly the same experience I have with these teams. Files will tend to grow, so it's a good practice to start them small anyway.

A common counter-argument I hear is that it's ""confusing to debug"" code that is passed on from one class to another, etc., but in reality, you shouldn't care about all the implementation details of all the classes involved in a flow, so you shouldn't need to step through all of them in the first place. Further, if you have proper unit tests set up, you shouldn't need to be stepping through the code incredibly often.

Speaking of unit testing, having many separate classes makes your code much more testable, and that is a good thing. I am willing to bet that the test coverage on this current project is all but non-existent, since unit testing would reveal the problems with the architecture of this codebase right away.

It sounds like you're in the unfortunate situation of being in a somewhat toxic environment in which popular opinion trumps reason, and it can be frustrating to be the only reasonable voice in a group, but don't stop fighting the good fight. Point them to resources by professionals, like Robert Martin or Martin Fowler. It's likely that your team is so rooted in their ways and tied to their ego that they won't budge, but don't stop pushing them in the right direction.

If by some miracle you have an existing unit test solution set up for your project, you might want to try the passive-aggressive introduction of unit-tests-as-guards against bad code that your team may be writing. On a toxic team I worked on, I found it helpful to refactor code into testable chunks and write unit tests around it, guaranteeing that my coworkers would face pain if they ever tried to change things for the worse. (Of course, this might just lead them to the conclusion that ""unit testing is bad!"" so try at your own risk. ;) )

","116161","","","","","2014-03-18 16:50:33","","","","0","","","","CC BY-SA 3.0" "349623","2","","349616","2017-05-26 00:05:05","","1","","

The method exists to do something. That's its contract. If it fails to do whatever it was expected to do, an exception should be thrown. The issue with return codes is that people can ignore them, which silently hides issues with the execution of the program.
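
A minimal hypothetical sketch of the difference (the names are made up for illustration): the return-code version compiles and runs happily even when the caller forgets to check, while the throwing version cannot be ignored by accident.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Return-code style: nothing forces the caller to look at the result.
    bool saveToDisk(const std::string& path) {
        std::FILE* f = std::fopen(path.c_str(), ""w"");
        if (!f) return false;              // easy for callers to ignore
        std::fclose(f);
        return true;
    }

    // Exception style: a failure cannot be dropped silently by accident.
    void saveToDiskOrThrow(const std::string& path) {
        if (!saveToDisk(path))
            throw std::runtime_error(""could not write "" + path);
    }

    int main() {
        saveToDisk(""/no/such/dir/report.txt"");        // the failure silently vanishes here
        saveToDiskOrThrow(""/no/such/dir/report.txt""); // this one stops the program loudly
    }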

","24081","","","","","2017-05-26 00:05:05","","","","6","","","","CC BY-SA 3.0" "129736","2","","78407","2012-01-12 11:42:25","","2","","

There is no well-marked line between OOP and procedural. Just saying, there are a number of people (the GTK APIs used to, IIRC) who map OOP onto procedural languages.

And then there's a silent majority that develops incrementally by starting from a main(), adding objects as they see fit, using them as tools and not as Platonic Ideas. When things get too big to handle, they write classes and diagrams like everybody does.

Then again, there's still some big design before finalizing a contract in software engineering, so you may wonder how procedural-first firms could handle this. First of all, there are not many of those firms, as it's harder to split tasks without objects.

Then, you know, there were many. They used control flow diagrams. With diamonds and rectangles and stuff. Nowadays they could use statecharts and, for protocol interactions over a network or between processes, sequence diagrams.

","30396","","","","","2012-01-12 11:42:25","","","","0","","","","CC BY-SA 3.0" "129963","2","","129961","2012-01-13 13:43:37","","7","","

Are there any good techniques for choosing a well-meaning name for a component, or how to build a family of components without getting muddled names?

Not really.

One tip, however, is to avoid ""passive voice"".

""DomainSecurityMetadataProvider"" is passive. There's no verb, it's all nouns and nouns used as adjectives.

""ProvideMetadataForDomainSecurity"" is more active. There's a verb.

Object-oriented programming is all (really) noun-verb. Noun == object. Verb == method. So a class name is generally very ""nounish"". To break this habit, and start inserting verbs, is difficult.

Are there any simple tests that I can apply to a name to get a better feel for whether the name is ""good"", and should be more intuitive to others?

Yes. You defined an excellent test in your question. Ask other people.

In the olden days we called this a ""design walkthrough"". It was a big, sweaty deal in a waterfall methodology. Nowadays, with Agile methods, it should be an ordinary collaboration between the author and users of a class. It doesn't (and shouldn't) take long. Discussing the design (and the names) before writing the tests (and the code) will reduce the astonishment factor and can prevent WTF's.

","5834","","5834","","2012-01-13 13:49:57","2012-01-13 13:49:57","","","","5","","","","CC BY-SA 3.0" "129967","2","","129950","2012-01-13 14:06:41","","9","","

Have you considered having an adult conversation with this person? Let him know his constant questions are a productivity killer, and ask why he feels he has to constantly ask you seemingly simple questions. Maybe he is a bit incompetent. You can choose to let him fail, or you can choose to help him succeed.

Ideally, let him know you are willing to help if he's truly stuck, but that you expect him to give you the respect you deserve and do a little independent research first. Continually giving him answers to simple questions helps no one. Encouraging him to learn and grow helps the whole team.

Yes, it will be an uncomfortable conversation, but it will be less uncomfortable than a few more months of silent resentment.

","6586","","6586","","2012-01-13 15:40:37","2012-01-13 15:40:37","","","","4","","","2012-01-13 17:03:30","CC BY-SA 3.0" "233772","2","","233766","2014-03-26 14:19:29","","8","","

One product, one source code, different data.

There are not really enough details to know whether my answer is the right one, but your problem is a common one for solution providers with multiple customers who have slightly different requirements. The answers always seem to converge on: Just One Product.

When you build, you build everything and when you run tests, you run all tests. That way one team doesn't silently break something for someone else and everything works all the time.

When you release, you release everything. You run short cycles and you only ship what needs to be shipped, but everything is there all the time for testing.

You may ship code for platform A that is not used on platform B, and you need configuration data to switch things on and off. Depending on your needs, you may converge on a solution that is largely data driven with little or no custom code. Or you might use a Domain Specific Language, or a scripting language to encapsulate differences.

This is a very good answer to a lot of questions. Perhaps it is the answer to yours.

","114930","","","","","2014-03-26 14:19:29","","","","0","","","","CC BY-SA 3.0" "130125","2","","130119","2012-01-14 12:21:25","","3","","

Define who you believe your customers to be. Are they the people who buy your company's products, are they the people within your company who might use and test your code, or are they perhaps both? When you write your code and submit it for testing, do you complete the entire product first, or do you deliver it in stages to be tested and signed off?

Whether Agile works in any company all comes down to the mind-set of the people working with it, how the team thinks about it, and how committed they are towards making an Agile methodology work for them.

Ultimately, Agile is suited to the team that wants to make it work for them, and yes this means it can be successfully implemented in even some of the more ""scary"" development situations provided the methodology is adapted to suit. It doesn't matter if you feel your company is service or product focused.

The thing to remember is that going Agile isn't about copying someone else's methodology verbatim, which would be fine if the methodology can gel with your particular company's needs. In the end, when changing the way you go about writing software, something has to give. It's about achieving a compromise between your vision of an agile workflow, and the existing business processes that may need to be modified to accommodate agile processes. This is not something you can easily change overnight, but which changes gradually, and in stages as you fine tune both the agile and business process so that they will gel well with each other. It may also require a shift in your thinking, so that you apply all of the agile concepts that your developers need, and ignore the ones that might create problems for your team, and your ""agile customer"" becomes a ""customer representative"" within the company who takes responsibility for being the product champion, acting as the customer's ""voice"" in the team just to keep everyone firmly grounded.

In the company where I work, we probably only do about half of the things that all of the standard methodologies recommend we should do, and for the rest, we've fine-tuned things to suit ourselves. Whenever we run into something that is inefficient, we implement changes to the method to deal with problem. This can happen from project to project on rare occasions, depending on who we are working with, and who we end up working for. So in the end, how you specifically go about implementing agile in your workplace is really down to you to ultimately decide. The key to it all is to simply keep an open mind, and to regularly re-evaluate your team's performance, removing practices that make things worse, and introducing the practices that make things better.

Good luck.

","39178","","39178","","2012-01-14 20:56:47","2012-01-14 20:56:47","","","","0","","","","CC BY-SA 3.0" "350182","2","","350179","2017-06-05 13:30:35","","4","","

UDP vs TCP

UDP is lightweight, stateless, and lossy.

TCP is heavy, stateful, and robust.

UDP is best when transmission errors can be ignored. TCP is best when transmission errors need correction. Voice over IP uses UDP because of how time-sensitive voice conversations are. It's better to just hear a hiccup and be back in sync than to ask for another copy of a lost packet and end up lagging further behind.

In your use case I'd hate to find myself stabbing at the phone trying to get it to move only to see it sporadically show my movements later in a burst. Keep up with me.

Frequency

Every 0.5ms (a tuning variable) is 2000Hz, and frankly faster than most monitors' 60Hz-120Hz refresh rate. Adjusting this may solve some of your problems. It should allow you to have about 20 times the people connected before the same problem shows up. Write your software so this number is decided in one place, so you can adjust it easily to experiment with your real needs.

If you want to make a bigger impact than just 20 times as big, consider cutting down the overhead. Rather than continually transmitting one x and y per packet, try transmitting a few of them in a burst. The duration of the burst will add to your lag, but it will be consistent. Keep it small enough and it will be barely perceptible.

This idea works well with frame buffering. Rather than rendering just one frame at a time, video games work on rendering multiple frames at a time and let them be consumed in order. If you have 10 frames buffered and a packet comes in with 10 sampled locations, you render each to its buffered frame.

Doing all this takes your 0.5ms rate to 0.1s and means your audience can be 200 times as big. That might not be acceptable lag for some video games, but it should work fine for fireflies.
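
As a sketch of what such a burst could look like on the wire (my illustration with made-up field names and sizes, not something from the question), the idea is simply to pack many samples into one datagram:

    #include <cstdint>

    // One sampled position; still sampled every 0.5ms, just no longer sent one per packet.
    struct Sample {
        int16_t x;
        int16_t y;
    };

    // One UDP datagram: 200 samples cover 0.1s at the 0.5ms rate, so roughly
    // 1/200th of the packets (and per-packet overhead) for the same movement data.
    struct Burst {
        uint32_t playerId;
        uint32_t firstSampleIndex;   // lets the receiver slot samples into its frame buffer
        Sample   samples[200];
    };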

Some people might be connected but not actually moving their firefly at the moment. Some unreliable savings can be gained by having their smartphones keep quiet until they have something useful to say.

UI

Your firefly game reminds me of a MythBusters episode where they tried to burn a ship by having a crowd hold mirrors in the sun. The biggest problem they had was that no one could tell whose reflection was whose. This meant they couldn't focus, because they couldn't tell if they needed to move up, down, left, or right to be on target.

Consider giving out groups of colors. If I'm one green dot among hundreds I have no hope of following my dot. If I'm one of ten red dots I have a chance even if there are 90 other colors bouncing around. Assuming I'm not color blind (red/green is the most common). Good color choices should be able to minimize this impact.

Directly to the Unity app

You are basically making a server when you do this. You will need to listen on a port for users to connect.

Remember to clean up the old location before moving them to a new location if you keep them on screen even after they've stopped transmitting for the moment. If you do, you're stateful again and must remember to keep drawing them in place until they time out. A way to handle that is to make the clients keep their own state and transmit where they were before, so you can use that to clean up the old location.

Or you can only draw when told to draw. This is simpler but now packet loss turns into firefly flicker.

","131624","","131624","","2017-06-05 13:49:49","2017-06-05 13:49:49","","","","6","","","","CC BY-SA 3.0" "35715","1","35716","","2011-01-11 19:02:19","","5","498","

I'm working on a greenfield project with two other developers. We're all contractors; myself and one other just started working on the project, while the original developer has been doing most of the basic framework coding. In the past month, my fellow programmer and I have been frustrated by the design decisions made by our co-worker.

Here's a little background information:

The application at face value appeared to be your standard n-layered web application using C# on the 3.5 framework. We have a data layer, a business layer and a web interface. But as we got deeper into the project we found some very interesting things that have caused us some trouble. There is a custom data access sqlHelper-type base which only accepts dictionary key/value entries and returns only data tables. There are no entity objects, but there are some massive objects which do everything and then are tossed into session for persistence.

The general idea is that the pages (.aspx) don't do anything, while the controls (.ascx) do everything. The general flow is that a client clicks on a button, which goes to a user control base, which passes a process request to the 'BLL' class, which goes to the page processor, which then goes to a getControlProcessor, which at last actually processes the request. The request itself is made up of a dictionary passing a string-valued method name, a stored procedure name, a control name and possibly a value. All switching of the processing is done by comparing the string values of the control names and method names.

Pages are linked together via a common header control that uses a combination of javascript and tables to create a hyperlink effect. And as I found out yesterday, a simple hyperlink between one page and another does not work because of the need to have quite a bit of information in session to determine which control to display on a page.

My fellow programmer and I both believe that this is a strange and uncommon approach to web application development. Both of us have been in this business for over five years and neither of us have seen this approach.

My question is this: how should we approach our co-worker and voice our concerns, and what should we do if he does not want to accept the criticism? We both do not want to insult the work that has been done, but we feel that going forward like this will create a nightmare for development.

Thanks for your comments.

","7428","","25936","","2012-03-08 14:08:20","2012-03-08 14:08:20","If most of team can't follow the architecture, what do you do?","","2","0","2","","","CC BY-SA 2.5" "36032","2","","35819","2011-01-12 16:09:44","","8","","

It depends on the company.

My job is exactly like this. I'm a software developer, but since we're a fairly small company, each developer takes on an ""unofficial"" support role usually based around their own software. Some developers have to do more support than others, depending on a number of factors such as how many products they have developed/shipped, how buggy their products are, and how effective they are at support. If you can provide the customer with exactly what they need to solve the problem, they will keep coming back to you to get issues resolved as quickly as possible. Double edged sword? Yes. You suffer from reduced productivity, but the customer is happy, and more likely to remain a customer. This is important for smaller companies.

We do have a systems support team, but because of the nature of what we do, they mostly have to deal with hardware-related issues. Personally, in a smaller company, this issue isn't as disruptive as one might imagine. Sure, you get calls while you're trying to work out some important feature, but at the same time, the customer service is much improved; customers get an authoritative voice that knows (in many cases) how to solve their problem, instead of someone with second-hand information and a support script. If you can't solve the problem there and then, you can reassure them personally that you will implement a fix for their bug, or consider their feature request for a future release. You can get real feedback straight from the users of your software, so your next version can be even better than you already think it is.

I like to think that happy customers create a more positive image of your company, which usually leads to more customers. And that's why, as a software engineer, I like tech support.

","5136","","","","","2011-01-12 16:09:44","","","","1","","","2011-03-07 16:25:16","CC BY-SA 2.5" "130906","2","","130645","2012-01-19 18:15:28","","2","","

At some point you have to be in charge. You sound like you're making an effort to let them voice their opinions. Your suggestions may not be perfect. The other devs may not understand/agree with you. They probably don't agree with each other. If you are in charge, it's not a democracy. They knew that when they took the job.

If there are no situations where they must follow you, you don't deserve to be their boss and you serve no purpose as one. Change your role in the team to be a resource and not an authority if you don't plan on using it. At some point you have to ship the best code you can under the time constraints that are available, and you can't debate, research and debate again every line of code until the end of time.

Give the orders. Live with the consequences. Learn from experience. Respect is a two-way street. You're demonstrating it and they're not.

","855","","","","","2012-01-19 18:15:28","","","","1","","","","CC BY-SA 3.0" "131032","2","","131029","2012-01-20 13:38:11","","4","","

The biggest thing that I see missing is keeping track of what the user has actually paid. Some people pay an amount more or less than the amount due, so some may have a balance due from the last invoice while others may have pre-paid for several periods ahead.

EDIT: Based upon your comment, I see this is handled separately, great!

The other thing I see is the way you are handling the TIMESTAMP field. For example:

Paid monthly

if ( Timestamp is in past = true AND Month gone by = 0 AND Days gone by >= 20 )

then ( create a new invoice and set ""Last Due Date"" to time() )

Suppose I initially signed up on 1/1/2012, then Timestamp starts off with that date. Assuming I pay monthly, this will mean you generate an invoice on 1/20/2012 and set the Timestamp to 1/20/2012. Does this mean you generate an invoice every 20 days rather than once per month? In other words will you generate an invoice on 2/9/2012 (20 days after 1/20/2012)?

The point is that the next invoice should be generated based upon the end date of the current billing period, not the date the invoice was generated.

Currently you ""set ""Last Due Date"" to time()"", perhaps you want to set Last Due Date to the first day of the next billing period? So for example when generating monthly invoices on 1/20/2012 you would change 1/1/2012 (current value in DB) to 2/1/2012. You don't state which database you are using, but many have built-in functions to add 1 month, quarter, year etc.
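
To illustrate the idea only (a hypothetical sketch using C++20 calendar types; in practice you would use your database's or language's own date functions, as noted above):

    #include <chrono>

    // Advance the billing anchor by exactly one calendar month, independent of the
    // date the invoice happens to be generated (e.g. 2012-01-01 -> 2012-02-01).
    std::chrono::year_month_day nextBillingAnchor(std::chrono::year_month_day lastDue) {
        auto next = lastDue + std::chrono::months{1};
        // Note: an anchor on the 29th-31st can land on a day that doesn't exist (!next.ok());
        // real code would snap to the last day of the month in that case.
        return next;
    }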

","13730","","13730","","2012-01-20 14:46:57","2012-01-20 14:46:57","","","","12","","","","CC BY-SA 3.0" "36575","2","","36561","2011-01-14 05:22:18","","3","","

How large and intricate is the codebase you were just introduced to? That can play a big factor (especially if there's a lack of documentation)

I often feel there's a silent war going on between the juniors and seniors. It comes down to petty stuff like people trying to put themselves on a pedestal and put you down in an attempt to show their own value.

Think of any lack of documentation as a practical joke they played on you before you even became a junior developer.

These people aren't teachers; they're as territorial as any of the other suits, and don't question it for a second. Clearly no one has taken you under their wing, and you still have a job to do. You may want to go to the boss of the seniors and express some of your general concerns. If you do that and then get fired months down the road, there will be many questions. If you stay quiet it might seem like you just don't care (which you clearly do).

Your best bet is to kill them with kindness and make sure your source code contributions are as clean as they can possibly be, so no one has anything to say. The less criticism you hear, the closer you get to being a senior developer yourself.

","13203","","13203","","2011-01-14 05:30:45","2011-01-14 05:30:45","","","","1","","","","CC BY-SA 2.5" "351054","2","","351050","2017-06-16 22:59:23","","5","","

Four points....

  1. Per Schwaber and Beedle, a Scrum task should take roughly 4 to 16 hours. Some complicated tasks can take longer if the team can't find a better way to break it down. So while your colleague is partly right, he is also partly wrong. Maybe buy him a copy of this book and ask him to read it (nicely).

  2. The task breakdown (including estimated hours) should be reviewed during the sprint kickoff, and the scrum master should be present and aware of the hours associated with each task. If he has a problem with the length of a particular task, he should speak up then, and not later. Make sure to call his attention to the estimates during the kickoff.

  3. Perception of progress should be managed by two things: objectively, via burndown rate, and subjectively, via demos - not by watching tasks move across the board. If your sprints are longer than 2 weeks, it might make sense to schedule more demos, possibly weekly. If he can see progress via some other means, maybe he won't obsess so much over counting tasks.

  4. Even if the task isn't completed, completed/remaining hours should be updated on a daily basis. So if your manager looks at total remaining hours instead of total remaining tasks, maybe that will be satisfactory.

","115084","","115084","","2017-06-16 23:06:29","2017-06-16 23:06:29","","","","0","","","","CC BY-SA 3.0" "131552","2","","131548","2012-01-24 11:21:53","","4","","

This happens to everyone from time to time, no matter how experienced or prepared. Honesty is the best policy in these situations, which means acknowledging you don't have the answers. You'll be a lot more credible in the long run than if you try to bluff your way through it.

Specific questions:

(a) If you don't know the answer, instead of just shrugging your shoulders, the important thing is to assure people you understand what they're asking and that you can go away and find out. You may want to provide a timeline and some idea of the work involved if it's likely to require heavy lifting.

(b) If you haven't thought about it, again, it's fine. Same as above, just let them know and get back to them.

(c) If you think the questions are irrelevant, you should generally speak up tactfully about why that's the case; it's quite possible they didn't realise it was irrelevant. If you're working with professionals, they'll accept it's irrelevant if you provide evidence/reasoning, and by checking with them, you might well discover it's relevant after all.

","38401","","","","","2012-01-24 11:21:53","","","","2","","","","CC BY-SA 3.0" "37397","2","","37339","2011-01-17 09:08:44","","117","","

Whilst no one posting here is in a position to tell you which to hire, I'd like to offer a little counterpoint to the proceedings...

One of our most recent new starters is the absolute image of professional experience.

In at 9, out at 5, one hour for lunch. No lates, no weekends.

Which probably sounds terrible to most of the people who have responded so far.

However, not only is his code better (clean, concise, patterned, understandable, maintainable, tested, on time!) than almost any other team member's, he is also an excellent sounding board for the passionate devs when they think they are about to solve all of our woes in a single deployment, a fountain of knowledge, and a voice of sanity saving us from ourselves.

He knows how to push back against pushy management. He can spot scope creep a mile down the road. He writes more unit tests than anyone else. He doesn't b*tch and moan when he gets lumped with a boring task, and he'll probably still be here in 5 years time.

(To add to my first answer)

How do you know the passionate bloke is passionate other than the fact he told you?

He might be doing his best keen face because he so desperately needs the job; people will say almost anything to get a job at the moment.

He might think he's passionate about coding, but will the sheen start to tarnish when he realises 99% of us don't write sexy code?

Experience is quantifiable and provable.

Experience knows that, day-to-day, most of us work on non-sexy systems and dirty legacy code. And experience shows that it can still drag itself out of bed in the morning to deal with that.

I would like to reiterate I am not telling anyone who to hire. I do not think experience is better than passion or vice versa. I am not on a massive downer about people who are passionate about coding, but I find it a little worrying to see the lack of balance being presented here. All of the other top-voted answers here make very good, valid arguments (Matthew Kubicina, User 9094, Otávio Décio, Bernard Dy) and I have voted them as such, even if I have reservations about some of their opinions.

","9228","","9228","","2011-01-17 13:41:06","2011-01-17 13:41:06","","","","18","","","2011-01-17 17:57:02","CC BY-SA 2.5" "351464","2","","351408","2017-06-23 10:08:42","","2","","

I think this might be a controversial meta-answer, and I'm a bit late to the party, but I think it's very important to mention this here, because I think I know where you're coming from.

The problem with the way design patterns are used is that when they are taught, they are presented with a case like this:

You have this specific scenario. Organize your code this way. Here's a smart-looking, but somewhat contrived example.

The problem is that when you start doing real engineering, things are not quite this cut-and-dried. The design pattern you read about will not quite fit the problem you are trying to solve. Not to mention that the libraries you are using totally violate everything stated in the text explaining those patterns, each in its own special way. And as a result, the code you write ""feels wrong"" and you ask questions like this one.

In addition to this, I'd like to quote Andrei Alexandrescu, who, speaking about software engineering, states:

Software engineering, maybe more than any other engineering discipline, exhibits a rich multiplicity: You can do the same thing in so many correct ways, and there are infinite nuances between right and wrong.

Perhaps this is a bit of an exaggeration, but I think this perfectly explains an additional reason why you might feel less confident in your code.

It is in times like this, that the prophetic voice of Mike Acton, game engine lead at Insomniac, screams in my head:

KNOW YOUR DATA

He's talking about the inputs to your program, and the desired outputs. And then there's this Fred Brooks gem from the Mythical Man Month:

Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.

So if I were you, I would reason about my problem based on my typical input case, and whether it achieves the desired correct output. And ask questions like this:

  • Is the output data from my program correct?
  • Is it produced efficiently/quickly for my most common input case?
  • Is my code easy enough to locally reason about, for both me and my teammates? If not, then can I make it simpler?

When you do that, the question of ""how many layers of abstraction or design patterns are needed"" becomes much easier to answer. How many layers of abstraction do you need? As many as necessary to achieve these goals, and not more. ""What about design patterns? I haven't used any!"" Well, if the above goals were achieved without direct application of a pattern, then that's fine. Make it work, and move on to the next problem. Start from your data, not from the code.

","103483","","103483","","2017-06-23 10:26:11","2017-06-23 10:26:11","","","","0","","","","CC BY-SA 3.0" "351513","2","","351509","2017-06-24 07:32:22","","10","","

Design should be planned for, either as enough extra story points per story or as a separate story. And you should still have designers on your team who own and monitor the design.

The trouble is that design represents ""no business value"", and pure user stories will typically be dumped onto ""the team"", which is this blob of developers that focuses on workable software rather than maintainable software. So with the introduction of Scrum, conscious design often ceases to exist. For larger, long-running products this is a disaster waiting to happen, of course. It will be a silent one though: development time and the number of bugs coming back will just gradually go up. And no one will regard this as worrying; we will just do more testing and feel good about it. It is not a coincidence that testing has grown into a big thing lately, also in run-of-the-mill software projects.

Fortunately, today we also have a fancy name for this: ""technical debt"". It is a great name because it does not sound like a problem to anyone but the development team. It implies it is their fault too, the techies should pay for it, not the business. It is their problem.

And in all fairness, it really is. The development team must make sure they do design. They are self-organising and must be strong enough to reserve the resources to do what is necessary. They commit to a plan they make themselves. There will be pressure to move ahead with features, but it remains their responsibility to think things through and to have people who are responsible and accountable for this. Scrum does make it harder to maintain these basics.

","209665","","","","","2017-06-24 07:32:22","","","","3","","","","CC BY-SA 3.0" "132230","2","","132226","2012-01-27 23:24:30","","8","","

To me, it doesn't seem that the role of a product owner goes against the idea of open source.

The idea of open source software is that of freedom to learn, improve, change, and distribute in the best possible manner to solve real-world problems. It's all about collaboration to create a product that's available to the general public and that's usable, with a great sense of transparency and visibility.

The question you linked to discusses a product owner in the sense of Scrum - an individual who represents the voice of the customer/user for the development team. This is a person who ensures that the product is valuable by creating (or, in some cases, transforming into a usable form) and prioritizing requirements and defect reports.

There are different structures for running open-source projects, just like there are different development methodology and team structures in companies developing commercial software. I don't know of any open source teams that have a ""product owner"" role, but I can see it being useful in some cases.

The role of a product owner would probably be most useful on a project where the development team is disjoint (fully or partially) from the users of the software. Looking at open source software packages, things like GNUmed, Koha, and Tux Paint stand out - the target audience are people with vastly different backgrounds than software developers (although some users of the packages might have software development experience), and often have special needs or requirements that must be understood. Someone or some people in a role similar to product owner would be useful to ensure that the product is useful to the target audience.

If anything would have a more detailed, authoritative discussion of this, I would suspect it would be the writings of Eric S. Raymond. It sounds like something that might be discussed in The Cathedral and The Bazaar, or a similar essay by Raymond. However, it's been a while since I read these works, so I can't quote anything in particular.

","4","","","","","2012-01-27 23:24:30","","","","2","","","","CC BY-SA 3.0" "132357","2","","122191","2012-01-29 13:02:33","","89","","

Let's talk about cars.

Oh wait, we already did - remember that time we met, some time ago? We talked about cars. In fact, you seemed to be quite the expert on cars. You were able to explain, in detail, all of what's right, wrong, and exciting about the latest Formula 1 race. You knew by heart all of Lamborghini's models, including their price and availability. You even had thoughts of purchasing your own Ferrari 599 GTB Fiorano and were saving up for it (I bet the steak dinner didn't help much).

While explaining the faults of Toyota in a great, excited voice, you suddenly jumped from your chair and screamed into the air, waving your fists about: ""Damn it all, I'm a magnificent expert on all things related to cars! I'm going to be a car mechanic!""

And so you went. You had an interview, the Boss Man was just as impressed as I with your knowledge, and you were hired. The first client came in. His clutch was broken. You inspected it and didn't know what to do. As a matter of fact, you had absolutely no idea how to follow the advice the Boss Man gave you. You were fired.

But how could that be!? You know everything about cars! Except for ... everything about cars. You can very well know your dream car has a V12 engine, but you don't know what that actually means.

So you're not a car mechanic, really - you're a car enthusiast. And until you learn how cars work, you will remain an enthusiast.

Now let me ask you. How does $.fn.text work? And what about $.fn? What do they really mean? How does $(something) return a gigantic thingy containing things, and what is that thingy exactly? Can you replicate their functionality, at least a bit, in theory even? Can you cope without jQuery?
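
To make that concrete, here is a toy sketch of roughly what a $()-like text() helper might do under the hood - not jQuery's actual source, just the general shape, assuming a browser DOM:

    // A tiny $-like helper exposing only a text() getter/setter.
    function $(selector: string) {
      const nodes = Array.from(document.querySelectorAll<HTMLElement>(selector));
      return {
        text(value?: string): string | undefined {
          if (value === undefined) {
            // getter: concatenate the text content of every matched node
            return nodes.map(n => n.textContent ?? '').join('');
          }
          nodes.forEach(n => { n.textContent = value; }); // setter: replace it everywhere
          return undefined;
        }
      };
    }

    // $('p').text() reads all paragraph text; $('p').text('hi') replaces it.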

Saying that ""native JavaScript is hard"" is just ... false. First and foremost, because JavaScript as a language has nothing to do with the DOM, which is mainly what jQuery abstracts. Second because once you learn a bit about the DOM, you can already cruise through the most common cross-browser bugs. But just a little secret - everything is hard at first. Long division was a bitch in 5th grade.

As a second analogy for this answer: jQuery is to JavaScript-DOM (not JavaScript the language, just the DOM) like Array.prototype.forEach is to for. It works, for 99% of the cases. And it works well. But for that 1% which isn't covered, you need to know how to use the for loop, if only to be practical. This entire answer is based on the ""purer"" side of the question, and not even the technical side (the library's size, for example, and several other things as explained in Michael Dorrant's answer). Because I love JavaScript and when people seem to just throw it aside casually saying ""pah, those silly javascriptians"" and waving fancy white gloves, it gets down to morality.

If you can accept the fact that you'll always be a JavaScript enthusiast, then who am I to stop you? But if you want to be a JavaScript programmer, you first have to have the knowledge to at least choose between using jQuery (or any other library) and not using a library. Learn the DOM. Learn how to use it. Write your own small library or just some collection of helper functions. And once you are knowledgeable about the DOM and you choose to use jQuery - godspeed. Laziness is a reward for those who have worked hard.

","46310","","591","","2014-10-11 13:00:03","2014-10-11 13:00:03","","","","8","","","","CC BY-SA 3.0" "132576","1","132581","","2012-01-30 21:42:50","","19","10794","

How do you know how many programmers a particular project needs to be successful?

The company I work for fulfills orders for client companies. We have written an in-house warehouse management system that handles location based inventory management, order processing, bill-of-lading generation, invoicing, freight auditing and reporting (probably 50 reports). It also has barcode scanning functions and a client portal along with dozens of other smaller features. It also includes a full employee timeclock. It integrates with Quickbooks, UPS and FedEx. It handles work for at least 50 clients all differing slightly in their functionality. For example, we import orders from files the customers send but each customer sends a different file format (csv, excel, flat file and web services) so we have well over a dozen order conversion methods setup. Exports are the same story.

The project is complex and growing in complexity every day with over a quarter million lines of code. It's about 250,000 lines of VB.NET code, 6,200 lines of Ruby code and maybe 5,000 lines of PHP. It also has a MySQL database with about 200 tables.

Because of the constantly changing requirements and differing needs of dozens of clients the code itself varies greatly in the quality from extremely poor to relatively good code.

Currently, this project has only a single programmer - myself. I also currently do all the product support for our company of 75 people or so. That includes troubleshooting and setting up new clients and any new features that are needed. Plus, we're trying to rewrite the whole thing to be 100% Ruby on Rails based. And we would like to market the whole system within the next year or so to be used by other companies.

Currently, we have only myself as a programmer but I don't believe that is sufficient. Does anyone have any recommendations for how many programmers a project of this magnitude should have or how we should go about determining the answer to that question? Particularly given the fact that management would like the product to be commercial quality by next year?

","46410","","39178","","2012-01-30 22:42:14","2012-07-28 03:56:57","How to Determine # of Programmers needed for a project","","4","8","11","","","CC BY-SA 3.0" "351758","2","","351312","2017-06-28 11:23:23","","1","","

OOAD is about identifying entities and modeling real-life objects or concepts to a degree of abstraction. You will perceive it as being easier if, instead of writing classes, you write interfaces at first: you don't really have to implement them yet, but the code still compiles.

OOAD doesn't exclude the possibility of thinking of the system as big modules. You can still do that. Each module exists to satisfy a set of user stories (use cases). Such user stories need classes that collaborate to fulfill them.

One key difference between procedural and OO approaches is that procedural thinking usually maps requirements to screens, whereas OOD has a tendency to think of what happens under the hood, leaving the front end to be made by another person or in a non-OO fashion.

There's a technique in Extreme Programming called CRC Cards. CRC stands for ""class-responsibilities-collaborators"".

Basically you identify evident classes and assign a card to each one. Say, the class Invoice has its own card.

For every card-holding class you write what the responsibilities of that class would be, for example ""calculates a grand total"".

Also, for every card-holding class, you write which other classes that class has to ask for something in order to fulfill its own responsibilities. Here you may discover Invoice needs the collaboration of InvoiceDetail, or even discover that such a class is needed in the first place.

You may discover that some responsibilities you thought belonged to Invoice really belong to one of its collaborators.

After the exercise, every card becomes a class, every responsibility becomes a method, and every collaboration relationship may become a composition, an aggregation or a mere call.
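
For instance (a rough sketch, with class and method names invented to match the Invoice example above):

    // Card: InvoiceDetail - responsibility: knows its own line amount
    class InvoiceDetail {
      constructor(public description: string, public amount: number) {}
    }

    // Card: Invoice - responsibility: calculates a grand total
    //                 collaborator: InvoiceDetail (the collaboration became composition)
    class Invoice {
      private details: InvoiceDetail[] = [];

      add(detail: InvoiceDetail): void {
        this.details.push(detail);
      }

      grandTotal(): number { // the responsibility became a method
        return this.details.reduce((sum, d) => sum + d.amount, 0);
      }
    }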

This exercise can (and should) be done in a group, where even business people participate.

You can learn more about this technique in these links:

http://en.wikipedia.org/wiki/Class-responsibility-collaboration_card

http://www.extremeprogramming.org/rules/crccards.html

Examples of CRC cards:

","61852","","","","","2017-06-28 11:23:23","","","","0","","","","CC BY-SA 3.0" "38468","2","","38463","2011-01-20 20:01:11","","2","","

In response to your edit, there are different sets of eyes to look at the situation. So to help clarify any potential confusion, it helps to understand which perspectives apply.

From the development team perspective, there is no difference between contractor and employee. We are all on the same team, and we all have the same goal. Adding and removing team members will have the same disruption whether they are employees or contractors. All team members have the same responsibilities.

From a management perspective, there is a difference. The company is trying to protect its most precious resource--employees. For that reason, the company will prefer to keep its employees over its contractors. If a contractor proves invaluable to the team, the company will likely attempt to convert the contractor to employee. These types of decisions live outside the day to day development process.

Agile processes are more concerned with the day to day development activities, and managing how you deliver a quality product. The agile processes are less concerned with management responsibilities such as hire/fire/contract decisions and more concerned with how we use the resources at hand.


Previous answer

It's not a fundamental contradiction, but it does present some training challenges. Agile processes foster a very natural mentoring environment. Essentially the staff programmers would end up always being the voice of experience--at least as it pertains to corporate culture and the specifics of how the team does agile.

Having a regular ebb and flow of contract programmers is going to present the same challenges whether you do agile or not. You have to educate the contract employee on how you do business--this includes development processes and billing. You have to educate the contract programmer on the current design of the system so that they can begin contributing as quickly as possible. The hope is that contract employees are quick studies, and can start contributing to the project really quickly. On-the-Job-Training (OJT) works pretty well here.

What it boils down to is that you will take an initial productivity hit when you hire new developers and contractors until they get up to speed. The more you do it, the more it negatively impacts your team's performance. Hence, the old adage ""Adding more developers to an already late project makes it later"". (I believe that was Fred Brooks, unless he was quoting someone else).

","6509","","6509","","2011-01-21 15:47:54","2011-01-21 15:47:54","","","","0","","","","CC BY-SA 2.5" "38696","2","","38590","2011-01-21 14:24:55","","19","","

Your First Step = Learn Your Craft

Experience is more important than book learning:

Pick a project and work out how to achieve your goals.

This will undoubtedly lead you into book-learning etc. but will enable you to gauge your own progress and to choose what to read and when. A few pointers:

  • Start with something small.
  • Take things one at a time.
  • Do things as well as you can.
  • Don't add things to your code until you need them.
  • Don't ever add code you don't understand.
  • Don't repeat the same code twice in your project.
  • Always imagine that someone else will be working on your code tomorrow - try to make it as clear to that person as you can.

As for your choice of books:

If you want to go the C# route, your book list is superb. If you get to know all that lot then you'll be worth your weight in gold! I've been a (fairly well) paid .Net programmer since the early days of .Net, but still haven't read the most advanced of these books (but they are on my reading list). The lesson I take from this is that the advanced stuff has its place, but mastery of the basics can still give you a great career. So, don't worry too much about the advanced books until you actually need them. There is one book I would add to your list - even before the advanced C# books: Code Complete 2. It is probably the most recommended book on this site. Deservedly so, IMO.

Your Next Step = Build Trust

You mentioned earning a little money. To state the obvious: to earn money from developing software, you need to find someone willing to pay you. Unfortunately for you, finding that someone is going to be a challenge for you.

Why?

  1. Because of your age.

I may be mistaken about this, as it is (of course) quite unreasonable. However, the sad reality is that people hold prejudices about age. In my experience, many potential employers are likely to turn you away because they consider young people unreliable and unable to deliver on their promises. What makes this particularly unfair is that you can't do anything about your age except wait.

However, there are things you can do to increase the likelihood of finding employment as a developer despite your age:

a) Keep at it. If you don't go looking for customers because you don't expect them to turn you away then you'll never find the ones who will look past your age and see your qualities as a developer. In other words, don't allow other people's prejudices about age become your prejudices about other people.

b) Get an advocate - someone who will vouch for your abilities and who has more credibility in the eyes of prospects than you have yourself. Perhaps you have an older friend or relative who can speak up for you? Of course, you'll need someone who can vouch for your personal qualities, so make sure you really are up to scratch technically.

  2. Because you don't have industry experience.

Despite the fact that you obviously have talent, knowledge and enthusiasm, you don't have 5 years experience on the job. This is a problem that faces everyone new to a profession no matter their age. Often, people don't want the bother of employing people who don't already have a proven track record at doing the job.

Fortunately, you can do a lot about this one:

a) Recognise that this is a reasonable concern

It is difficult for someone to justify paying for a service when they have no evidence that they will get what they pay for.

When you're talking to prospective clients, be honest about your lack of experience, but demonstrate why it won't be a problem. If you show the initiative in this then you can undermine their objections before they have thought them through properly. The benefit of this is not to manipulate, but to show that you understand their business needs.

b) Build a reputation

Do small, manageable packages of work for a small enough fee that you take the risk out of the transaction for the client. Often, this will mean that you do your first work for free. Choose these clients carefully - you need to do something that will give you satisfaction for someone who will sing your praises when you deliver. I'm told that many developers do charity work to get themselves started, but family and family friends might also be able to offer you something.

c) Build experience

To demonstrate experience you need... experience. If you can't find anyone else to work for, work for yourself. Start a hobby project. Pick something that people will find useful, and may (in time) be willing to pay for. Don't work on it for the money, however, but for the experience. Consider this a long-term investment - you can expect payback over the long haul, not necessarily in the short term.

d) Develop your non-technical skills

If the paid programming thing doesn't work out at the moment, don't worry. Employers don't really just pay for skill in a particular area, but for a complete package.

Non-technical skills are as important in the IT industry as technical skills: employers are looking for professionalism as well as programming ability. These professional qualities can include people skills, financial experience, business knowledge and personal qualities like honesty, reliability etc.

All these can be developed independently of your technical skills. For example, if you need the money you could take a non-programming job. Alternatively, you could get involved with a community group or charity or sport or whatever, where you can build upon your non-technical abilities. Ultimately, these activities may well lead to you landing your first proper programming contract, if not directly (you never know what contacts you'll make) then indirectly because you are more rounded and have more to offer than other people.

","1928","","611","","2011-01-24 09:10:01","2011-01-24 09:10:01","","","","3","","","","CC BY-SA 2.5" "352030","2","","176825","2017-07-02 12:44:27","","1","","

Not prefixes, but case. Consider using TurgidCapitalCaseNames() for 'opinionated' functions and brief_lower() names for basic 'raw' functions.

This presumes you can separate your application into 'raw' and 'opinionated' parts. Sort of like what Linux calls policy/mechanism. Then, your question of where to put the decision work to satisfy preconditions answers itself: In the opinionated functions.


    error_type  mkdir_and_parents(string path);
    Dir         MakeAnEmptyOutputDirectory(string subdir);

Mixing identifier case like this disgusts some people, but it might work for you?

By 'opinionated', I mean functions that would be:

  • cock-sure: deluded with the unquestionable reachability of the goal
  • expedient: taking short cuts to success, e.g. succeeding silently if the directory exists already
  • precocious: acquiring needed resources without a second thought, allocating memory, making parent directories
  • stubborn: e.g. retrying while the system responds with 'filesystem full'
  • robust: gets stuff done; might log warnings when someone else is at fault, but probably has a fallback answer

Your 'raw' functions could be quite the opposite:

  • subservient: don't do anything unless asked, and when asked, do only that
  • leaky: make no attempt to cover up the horror, should anything go wrong; keen to hand the problem back to someone else
  • humble: assume no control over any resource not expressly given
  • simple: only knows about a very small range of things

I doubt there is much of a middle ground. It would be very confused ground.
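
To sketch how the split might look in practice (file-system flavoured, all names invented; the raw function hands every problem straight back, while the opinionated one quietly absorbs the cases it considers benign):

    import * as fs from 'node:fs';

    // raw: subservient and leaky - does exactly one thing and returns the horror untouched
    function try_mkdir(path: string): Error | null {
      try {
        fs.mkdirSync(path);
        return null;
      } catch (e) {
        return e as Error;
      }
    }

    // opinionated: expedient and precocious - makes parent directories without a second thought
    // and succeeds silently if the directory already exists
    function MakeAnEmptyOutputDirectory(path: string): string {
      fs.mkdirSync(path, { recursive: true }); // recursive also makes 'already exists' a non-event
      return path;
    }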

Aside: I think internal subsystem code and system interfaces are usually 'raw'-like. On the other hand I've found most customer-facing 'event response' code is usually very opinionated: eg key command handlers, view renderers, report generators, cron tasks.

Such a correlation might be because it is more important that world-facing parts of the system achieve a robust something instead of a fragile nothing. Maybe it reflects the human or business user's rough needs and expectations, versus the conservative, detail-oriented engineer-artisan's needs for dealing with complexity and diagnosing faults.

","276908","","","","","2017-07-02 12:44:27","","","","1","","","","CC BY-SA 3.0" "133183","2","","132952","2012-02-03 16:17:57","","22","","

The typical response to noisy conditions is to listen to music with headphones.

However, one of the really interesting studies quoted in Peopleware is the experiment done at Cornell -- they gave two groups a complicated task involving a long string of calculations. One group listened to music while performing the task, and one group had silence.

What they didn't tell either group is that the complicated string of calculations always returned the original number.

It turned out that not everyone figured this out, but of the people who did, a large majority came from the group that did not listen to music.

The theory, apparently, is that listening to music somehow engages the part of the brain involved in creative thought, keeping it ""busy"" enough not to be able to look at the big picture of the task being performed.

Something to keep in mind the next time you plug in.

Look in the index under ""Cornell"" to find the reference.

","46717","","","","","2012-02-03 16:17:57","","","","2","","","","CC BY-SA 3.0" "39423","2","","39411","2008-09-10 19:00:27","","5","","

Estimating is hard. Working with NO estimates is a pipe-dream. So it's sort of a necessary evil.

  • Start by making it clear that initial estimates are way off, so if you come to an X-unit estimate, give a range of 0.6X - 1.6X.
  • Then there is the trick of doubling the estimate (however saying a ~2 year will take 4 years might be a problem)
  • (There's one more involving scaling the magnitude and bumping to the next higher unit. Need to refer to my book for this one.)

Next comes the part of how do I reach that estimate so that I can double/multiply? No easy answer.

  • Historical trends make sense if you're doing the same kind of project with the same level of team skills.
  • Next is the 'Let me steer the ship for 2-3 iterations' and then I'll get back with a better estimate.
  • Finally comes the ""Tell me how much it will cost.. Till then no one works on this."" In this case, you have no guardian angel. Sit and break it down and then apply piecemeal guesses till you have a cumulative estimate. Then do the double-n-bump.

In my 5 years of living in the trenches, the only thing I've seen help under real-world constraints is 'guessing by the seat of your pants' estimation... no FP, no formal process. Still, things can improve.. for that I'd suggest getting Mike Cohn's book 'Agile Estimating and Planning' - that's the book that is closest to us sinful programmers and reality. The rest are preachers on their high horses... Capers Jones, which planet is your abode?

Nice question.. I keep asking this when every project begins, and then once every week while I'm doing the GRIND. Still no answer from the voices in my head.

","21049","Gishu","","","","2008-09-10 19:48:37","","","","2","","","2011-01-24 17:34:45","CC BY-SA 2.5" "39461","2","","39449","2009-02-16 12:48:16","","55","","

Long reply, but hey, I’ve got a summary at the end, so just skip to the summary if you can’t be bothered reading the entire thing!

As a developer I had to deal with this situation on literally every other project, but it wasn't until I moved into project management that I learned how to deal with it effectively. For me, dealing with it effectively is about two things: managing expectations and understanding how estimation works.

Start with the premise that it is unethical to provide an estimate, commit to an estimate or give any other indication of estimate accuracy without being able to carry out some due diligence first. Other people rely on your professional ability to predict the amount of work required; giving a false indication will hurt them and their business.

But you have to give something. In real life you get dragged into an impromptu meeting or a late project, and your superior will probably make it clear they expect you to come up with some figure straight away or to comment on the estimate they provided. This is where expectations management comes into play.

Explain that it would be wrong of you to give any figure or any indication without understanding the problem and working the numbers out for yourself. Say that their figures might be quite correct; you just don’t know before you have gone through the estimation exercise yourself. And even though you might have a good picture of what is needed there and when, say that you still need some time to work the numbers out. There is only one estimate they might expect you to give: when you are going to be able to provide an estimate. By all means do provide that figure.

As a developer, never take responsibility for (or give any indication that can be interpreted as acceptance of) other people's estimates without being able to review them first. As a project manager it is a totally different matter, because then you actually have some control over the estimation process: the way an estimate is derived and reviewed. But you have to rely on other people to get the actual work done, and you need to make sure they are committed.

Never even comment on estimates without being able to do the due diligence. This is ethical. A lawyer or a doctor will make it absolutely clear they cannot give any advice unless a client (or patient) plays by their rules and goes through an assessment procedure first. You similarly have a right to satisfy your questions before giving professional opinion.

The second part is about how estimation works. I suggest researching various approaches to doing estimates and how estimation process works, including industries other than software development (manufacturing, financial markets, construction). This will give you idea what can be reasonably expected from you by your boss or client and, strangely, will help making more accurate predictions about the amount of work. It will improve your ability to defend your estimates and you will need to defend the figures every time they are different from the ones provided by an architect or a sales person.

Normally, the way it works is that your estimate is first scanned for odd-looking or relatively large items. Hence be prepared to defend anything with a “non-standard” name. Also split larger tasks so that all tasks have the same order of magnitude, i.e. if most tasks take 2 days and one single task is 10 days, be prepared to get drilled.

Be clear about what is included in each task; it’s best to split dev and unit testing instead of just having dev and having someone assume that it includes documentation as well. Obviously this way you’ll need to produce a fairly fine-grained estimate.

Next comes the drilling. Since it is quite difficult to review a long work breakdown, your client or boss is likely to adopt a different strategy: concentrate on a random bit they might know something about and drill down until they either manage to discredit the entire estimate or are satisfied with your answers. The credibility of the entire estimate might depend on a random sample! Hence, once again, you need time to prepare it carefully, include only relevant bits, exclude any extras or move them to a “nice to haves” section, and think through how you are going to defend the figures.

Obviously you can be either consistent in your approach, i.e. estimating on the basis of features, number of screens etc or have a mix of approaches, but in any case be prepared to defend why you selected a certain way of estimation. Be also prepared to explain why your figures are different from whoever else’s attempt at predicting the amount of work required.

Learn the obvious signs of weak estimates:

  • Filled with general run-of-the-mill tasks, copied from template (good estimates are specific to the task at hand).

  • Coarse grained estimates (i.e. tasks longer than couple of days).

  • Estimates done on early stage of a project or by someone who might not have actual knowledge of the requirements or work involved.

  • Estimates compiled by people other than actual doers

  • Vague estimates (not clear what is included and, equally important, excluded).

  • Substantial difference in the order of task magnitudes.

Practise evaluating other people's estimates and drilling into the figures without actual knowledge of the implementation detail. This will help to back your claim for some extra time when pressed with the request to confirm someone else’s estimate when you have no hard evidence.

To summarise:

  • Do not commit to an awful estimate, or any estimate for that matter, before you have had an opportunity to do due diligence.

  • Make it clear on the outset, don’t let anyone assume it is any other way and interpret your silence as a sign of agreement.

  • Know how various estimation methods work, their practical application and merits, including these outside software development.

  • Be prepared to defend your estimate.

  • Learn how to evaluate other people’s estimates so you don’t have to commit yourself to vastly inaccurate figures.

","12348","Totophil","","","","2009-02-16 12:56:25","","","","2","","","2011-02-14 17:15:48","CC BY-SA 2.5" "352345","2","","352341","2017-07-07 18:24:52","","16","","

You're right that Kanban doesn't have the concept of Sprints or Sprint Planning like Scrum does. That's because it's a leaner methodology. More things are done just-in-time.

It's up to you to decide how to schedule planning activities, but I would recommend doing them as close to the start of work as possible. This is most effective when there are representatives of all of the major stakeholders embedded on the team (the same also makes Scrum more effective).

I think that this diagram, based around Disciplined Agile Delivery, gives a good pictorial representation of a lean software process:

The events of the Daily Standup and Sprint Planning are captured across the Coordination Meeting and Replenishment Modeling Session. Coordination Meeting is more like a Daily Standup from Scrum and a Replenishment Modeling Session is more like Backlog Refinement and Sprint Planning. However, it's OK to bring in some requirements discussion into the Coordination Meeting if that works for your team.

Like most things in a lean process, these happen just-in-time. There are no timeboxes, and events don't happen on a particular cadence like they do in Scrum. You do the work when it adds value for the team and stakeholders.

Which you can compare to a pictorial representation of a process based on Scrum modeled in the context of Disciplined Agile Delivery:

Instead of constraining yourself to 2-4 week Sprints with planning at the start, daily stand-ups, and a review and retrospective at the end, leaner methodologies will enact your demonstration, coordination, and retrospective meetings whenever the stakeholders think that it is appropriate.

Kanban will provide guidance for managing your backlog of work and work-in-progress (WIP). You can turn to other techniques and methods for the exact implementation of other activities since Kanban is generally silent on those.

","4","","4","","2020-06-12 09:44:18","2020-06-12 09:44:18","","","","3","","","","CC BY-SA 4.0" "39817","2","","39771","2011-01-25 13:08:38","","13","","

A lot of the debate about (System) Hungarian Notation depends on the area of work. I used to be very firmly on the side of ""no way!"", but having worked for a few years at a company where it is used for embedded development I can see some advantages of it in certain applications and it has definitely grown on me.

Systems Hungarian

From what I can tell, Systems Hungarian tends to be used a lot in the embedded field. In PC applications, the compiler will deal with a lot of the issues associated with the differences between (e.g.) strings, integers and floating point values. On a deeply embedded platform, you're more often concerned about the differences between 8-bit unsigned integers, 16-bit signed integers etc. The compiler (or even lint with MISRA rules enforced) doesn't always pick up on these. In this case having variable names like u8ReadIndex, s16ADCValue can be helpful.

Apps Hungarian

Apps Hungarian has definite advantages when it comes to PC/Web applications, for example offering a visual difference between 'unsafe' strings and 'safe' strings (i.e. those entered by a user and those that have been escaped or read from internal resources or whatever). The compiler has no knowledge of this distinction.

What's the Point?

Use of (Systems or Apps) Hungarian is all about making wrong code look wrong.

If you're copying an unsafe string straight into a safe string without some escaping, it'll look wrong if you use Apps Hungarian.
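
For instance (a contrived sketch: the us/s prefixes are the whole point, and escapeHtml is just a stand-in for whatever escaping your framework provides):

    // us = unsafe (raw user input), s = safe (escaped)
    function escapeHtml(usText: string): string {
      return usText.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
    }

    const usComment = '<b>pretend this came from a form field</b>';
    const sComment = escapeHtml(usComment);

    document.body.innerHTML += sComment;   // reads fine
    document.body.innerHTML += usComment;  // an us-value reaching the output looks wrong at a glance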

If you're multiplying a signed integer by an unsigned integer, the compiler will (often silently) promote the signed one to (a possibly enormous) unsigned one, possibly resulting in an error: Systems Hungarian makes this look wrong.

In both of these situations the (Apps/Systems) Hungarian notation tends to make formal code reviews quicker as there is less referring back to the type of the variable.

Overall

Overall, my opinion is that the most important thing is that you have a coding standard. Whether that uses Systems Hungarian, Apps Hungarian or neither is a matter of personal or group preference, much like choice of indentation types etc. However, there are definite advantages to the whole development team working to the same preference.

","5814","","","","","2011-01-25 13:08:38","","","","1","","","","CC BY-SA 2.5" "352589","2","","287037","2017-07-11 20:00:40","","4","","

To add a different voice:

Yes, if you work as you describe, there is no problem with using rebase and force-pushing. The critical point is: You have an agreement in your team.

The warnings about rebase and friends are for the case where there is no special agreement. Normally, git protects you automatically, by, for example, not allowing a non-fast-forward push. Using rebase and force-pushing overrides that safety. That's ok if you have something else in place.

On my team, we also sometimes rebase feature branches if history has become messy. We either delete the old branch and make a new one with a new name, or we just coordinate informally (which is possible because we are a small team, and no one else works in our repository).


Note that the git project itself also does something similar: They have an integration branch that is regularly recreated, called ""pu"" (""proposed updates""). It is documented that this branch is recreated regularly, and that you should not base new work on it.

","12248","","","","","2017-07-11 20:00:40","","","","0","","","","CC BY-SA 3.0" "133635","2","","133502","2012-02-06 22:33:31","","1","","

My company, although not implementing an 80/20 rule as such, encourages us to consistently keep up to date with the latest technology, read blogs and posts on sites such as Programmers, and basically make sure we are keeping our personal development and interests up to date.

To achieve this they didn't specify an exact amount of time, but made sure that we work to project and task deadlines rather than an hour-by-hour workload. This has meant that we consistently talk with our immediate manager to ensure the tasks assigned to us, or that we take on, are manageable and allow us some time outside of the normal workload to take for ourselves and refresh our minds and interests. If we are struggling we talk about it to find out why, i.e. skill set limitation, too much work, unrealistic deadlines etc.

We do keep rough track of time spent for billing purposes, but this is to the hour on a whole-day basis, so we are not expected to record what we have done during every minute of the day.

Each of us in the team manages their own time, so it's up to them to determine how they do this. Some members of the team work hard at the beginning of the week to get their tasks done, and any time left over they relax into their own interests. Others like me tend to mix this in during the week, as I quite often hit road blocks in my project / task. So at this point I jump over to something else, which is quite often my own interest. However, as we are task/project driven, I still have to make sure I get the job done, so it is my responsibility to make sure I don't use all my time on personal tasks to the detriment of the team.

This has in the past led to some team members rushing their tasks and producing undesirable results. To help with this we introduced periodic peer reviews and also encourage everyone to read other developers' code check-ins. We encourage an open discussion forum where everyone is free to voice their opinion, albeit in a respectful manner.

In the end it came down to a bit of trust from the powers that be, taking ownership by the grunts to ensure the work is done and a good manager in the middle to keep the ship running.

","27796","","","","","2012-02-06 22:33:31","","","","0","","","","CC BY-SA 3.0" "352709","2","","352702","2017-07-13 13:06:38","","15","","

I would like to understand better the implications of using such a paradigm in a project:

  • Is the premise of the problem correct, or did I miss something relevant?
  • Is the solution a good architectural idea, or is the price too high?

Your approach brings some large problems into your source code:

  • it relies on the client code always remembering to check the value of s (see the sketch after this list). This is common with the approach of using return codes for error handling, and it is one of the reasons that exceptions were introduced into the language: with exceptions, if you fail, you do not fail silently.

  • the more code you write with this approach, the more error-handling boilerplate you will have to add (your code is no longer minimalistic), and your maintenance effort goes up.
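
A small sketch of that first point (the names are invented): with a status result nothing forces the caller to look at it, whereas an uncaught exception at least fails loudly:

    // status-result style: the code compiles and runs even if ok is never inspected
    type Result = { ok: true; value: string } | { ok: false; error: string };

    function loadConfig(path: string): Result {
      return { ok: false, error: 'file not found: ' + path }; // imagine real I/O here
    }

    const s = loadConfig('app.cfg');
    // forgetting the next line means failing silently and carrying on with bad data
    if (!s.ok) { console.error(s.error); }

    // exception style: if nobody catches, the failure is loud and execution stops
    function loadConfigOrThrow(path: string): string {
      throw new Error('file not found: ' + path);
    }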

But my several years as a developer make me see the problem from a different angle:

The solutions for these problems should be approached at technical lead level or team level:

Programmers tend to simplify their task by throwing exceptions when the specific case seems too rare to be implemented carefully. Typical cases of this are: out of memory issues, disk full issues, corrupted file issues, etc. This might be sufficient, but it is not always decided at an architectural level.

If you find yourself handling every type of exception that may be thrown, all the time, then the design is not good; what errors get handled should be decided according to the specifications for the project, not according to what devs feel like implementing.

Address this by setting up automated testing, and by separating the specification of the unit tests from the implementation (have two different people do these).

Programmers tend not to read documentation carefully [...] Furthermore, even when they know, they don't really manage them.

You will not address this by writing more code. I think your best bet is meticulously-applied code reviews.

Programmers tend not to catch exceptions early enough, and when they do, it is mostly to log and rethrow (refer to the first point).

Proper error handling is hard, but less tedious with exceptions than with return values (whether they are actually returned or passed as i/o arguments).

The most tricky part of error handling is not how you receive the error, but how to make sure your application keeps a consistent state in the presence of errors.

To address this, more attention needs to be allocated to identifying and exercising error conditions (more testing, more unit/integration tests, etc.).

","12617","","","","","2017-07-13 13:06:38","","","","16","","","","CC BY-SA 3.0" "134182","2","","134176","2012-02-09 13:47:36","","2","","

Basing your efforts on the CMMI is probably a good idea, even if you don't undergo the appraisals and get formally audited and rated. There's plenty of literature available about the CMMI, CMMI and other process improvement techniques such as Lean and Six Sigma, and CMMI and agile software development. The SEI has an entire collection of resources, some available for free, about different aspects of CMMI and guidance for different types of organizations.

I'd recommend looking in great depth at the continuous approach to implementing CMMI, rather than the staged approach. It strikes me as a much more efficient way to determine exactly where your organization stands now and improve in areas that add the most business value. This will allow you to not only align your improvement efforts with business objectives, but quickly achieve progress milestones and demonstrate the effects of improvement, increasing buy-in from all levels.

Something to keep in mind, though, is that process improvement is generally more successful when it's a grassroots effort. When process changes are dictated from above - by people the developers ""in the trenches"" might see as being out of touch with how things are done in the trenches - there is probably going to be pushback, even if the idea is a good one. Be prepared for this.

Some type of engineering process group might also be beneficial. Bring together representatives of the various organizational components and teams impacted by the improvement so that everyone's voice is heard. This would include not just representatives of each role, but perhaps various product development teams. Without knowing how your organization is structured, I can't say exactly who you might want to look at, but include people from every level of the organization in the group. Also, make the discussions and decisions made by this group available to the organization for comments and raising of any problems.

","4","","4","","2012-02-09 16:20:11","2012-02-09 16:20:11","","","","6","","","","CC BY-SA 3.0" "41019","1","","","2011-01-28 21:05:10","","9","1414","

I just joined a (relatively) small development team that's been working on a project for several months, if not a year. As with most developers joining a project, I spent my first couple of days reviewing the project's codebase.

The project (a medium- to large-sized ASP.NET WebForms internal line of business application) is, for lack of a more descriptive term, a disaster. There are three immediately noticeable problems with the coding standards:

  1. The standard is very loose. It describes more of what not to do (don't use Hungarian notation, etc..) than what to do.
  2. The standard isn't always followed. There are inconsistencies with the code formatting everywhere.
  3. The standard doesn't follow Microsoft's style guidelines. In my opinion, there's no value in deviating from the guidelines that were set forth by the developer of the framework and the largest contributor to the language specification.

As for point 3, perhaps it bothers me more because I've taken the time to get my MCPD with a focus on web applications (specifically, ASP.NET). I'm also the only Microsoft Certified Professional on the team. Because of what I learned in all of my schooling, self-teaching, and on-the-job learning (including my preparation for the certification exams) I've also spotted several instances in the project's code where things are simply not done in the best way.

I've only been on this team for a week, but I see so many issues with their codebase that I imagine I'll be spending more time fighting with what's already written to do things in ""their way"" than I would if I were working on a project that, for example, followed more widely accepted coding standards, architecture patterns, and best practices. This brings me to my question:

Should I (and if so, how do I) propose to my project manager and team lead that the project needs to be majorly renovated?

I don't want to walk into their office, waving my MCTS and MCPD certificates around, saying that their project's codebase is crap. But I also don't want to have to stay silent and have to write kludgey code atop their kludgey code, because I actually want to write quality software and I want the end product to be stable and easily maintainable.

","11438","","","","","2011-01-29 04:10:07","How do I (tactfully) tell my project manager or lead developer that the project's codebase needs serious work?","","14","7","4","","","CC BY-SA 2.5" "134632","2","","134118","2012-02-12 08:11:26","","13","","

For some insight into why these operators are in the 'C-style' languages to begin with, there's this excerpt from K&R 1st Edition (1978), 34 years ago:

Quite apart from conciseness, assignment operators have the advantage that they correspond better to the way people think. We say ""add 2 to i"" or ""increment i by 2,"" not ""take i, add 2, then put the result back in i."" Thus i += 2. In addition, for a complicated expression like

yyval[yypv[p3+p4] + yypv[p1+p2]] += 2

the assignment operator makes the code easier to understand, since the reader doesn't have to check painstakingly that two long expressions are indeed the same, or wonder why they're not. And an assignment operator may even help the compiler to produce more efficient code.

I think it's clear from this passage that Brian Kernighan and Dennis Ritchie (K&R) believed that compound assignment operators helped with code readability.

It's been a long time since K&R wrote that, and a lot of the 'best practices' about how people should write code has changed or evolved since then. But this programmers.stackexchange question is the first time I can recall someone voicing a complaint about the readability of compound assignments, so I wonder if many programmers find them to be a problem? Then again, as I type this the question has 95 upvotes, so maybe people do find them jarring when reading code.

","24029","","591","","2015-07-19 12:57:16","2015-07-19 12:57:16","","","","0","","","2012-02-12 08:11:26","CC BY-SA 3.0" "41450","2","","41409","2011-01-30 16:57:23","","25","","

Test Driven Design works for me for the following reasons:

It is a runnable form of the specification.

This means that you can see from the test cases:

  1. THAT the code being called fulfills the specification, as the expected results are right there in the test cases. Visual inspection (assuming the test cases pass) tells you immediately: ""oh, this test checks that calling invoiceCompany in this situation should have THAT result"" (a minimal sketch of such a test follows this list).
  2. HOW the code should be called. The actual steps needed to run the tests are specified directly, without any external scaffolding (databases are mocked out, etc.).
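To make that less abstract, here is a minimal, hypothetical sketch of such a runnable specification in Python (pytest conventions). The invoicing domain, the names, and the numbers are all invented for illustration; the toy implementation is only there so the snippet actually runs:

    from dataclasses import dataclass

    @dataclass
    class Order:
        amount: float

    @dataclass
    class Invoice:
        total: float

    def invoice_company(name, orders, discount_threshold=1000):
        # Just enough implementation to make the specification below pass.
        total = sum(o.amount for o in orders)
        if total > discount_threshold:
            total *= 0.95          # 5% volume discount
        return Invoice(total=total)

    def test_invoice_totals_orders_and_applies_volume_discount():
        # THAT: given this situation, the result must be THIS.
        orders = [Order(amount=500), Order(amount=700)]
        assert invoice_company('ACME Ltd', orders).total == 1200 * 0.95

    def test_invoice_for_no_orders_is_zero():
        # HOW: this call is the entire calling convention - no database or
        # other scaffolding is needed to exercise the code.
        assert invoice_company('ACME Ltd', orders=[]).total == 0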

You write the view from the outside first.

Frequently, code is written by first solving the problem and then thinking about how the code you just wrote is to be called. This often gives an awkward interface, because it is easier to ""just add a flag"", etc. By thinking ""we need to do THIS so the test cases will look like THAT"" up front, you turn this around. This gives better modularity, as the code is written according to the calling interface, not the other way around.

This will usually result in cleaner code, too, which requires less explanatory documentation.

You get done faster

Since you have the specification in runnable form, you are done when the full test suite passes. You may add more tests as you clarify things at a more detailed level, but as a basic principle you have a very clear and visible indicator of progress and of when you are done.

This means that you can tell whether a given piece of work is necessary (does it help pass a test?), so you end up needing to do less.

For those pondering whether it may be useful to them, I encourage you to use TDD for your next library routine. Slowly build up a runnable specification, and make the code pass the tests. When done, the runnable specification is available to everyone who needs to see how to invoke the library.

Recent Study

""The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15 to 35% increase in initial development time after adopting TDD."" ~ Results and Experiences of 4 Industrial Teams

","","user1249","16090","","2016-02-18 23:31:15","2016-02-18 23:31:15","","","","2","","","","CC BY-SA 3.0" "41737","2","","41732","2011-01-31 09:10:01","","1","","

As @James noted, face-to-face communication is very, very important. We are in the same situation, with members of the team scattered around the globe, currently in 3 different locations, with a total timezone difference of 5.5 hours.

We just recently got a new teammate and it took some time before the first videoconference meeting with him. It made a big difference for me to be able to associate a face with the voice.

We visit each other physically at more or less regular intervals (our team lead 3-4 times a year, us developers about once a year), for a couple of days to a week each time. Of course, upon these occasions we also arrange common lunch/dinner. This definitely helps team bonding, although it is still far from working in the same office all the time.

We also do our daily stand-up meeting (Scrum style) via conference call; it's a bit awkward, but it still helps keep the team together.

","14221","","14221","","2011-01-31 09:16:01","2011-01-31 09:16:01","","","","0","","","","CC BY-SA 2.5" "238492","2","","238481","2014-05-09 08:01:53","","8","","

The Product Owner does not have an active role in the daily stand-up meeting in Scrum. He/she can listen in to get a sense of what is going on in the team and how they are doing on their commitment.
The Product Owner should remain in the background during these meetings and not speak up. If the Product Owner has grave concerns, then he/she should take that up with the Scrum Master after the meeting.
If the Product Owner is present, it is possible they get asked for some clarification on some of the stories. Even that should be postponed until after the meeting in order to avoid drawn-out discussions during the stand-up.

If the actual customer can't be present often enough to take on the role of Product Owner, then a proxy can be nominated. The most important characteristics for the proxy are:

  • They have sufficient insight into what the customer actually wants to be able to convey that vision to the team
  • They have enough authority to make decisions on the spot and to defend those decisions both to the team and to the customer.

Especially the second point usually means that the proxy must be either a very senior developer or someone from management circles.


I am not familiar with the term ""External Stakeholders Engagement"" (which is not a term used in Scrum), so I can't say definitively how it relates to a Product Owner, but at first glance the aim seems to be the same.

","5099","","5099","","2016-02-23 16:59:01","2016-02-23 16:59:01","","","","9","","","","CC BY-SA 3.0" "42626","2","","42181","2011-02-02 00:08:38","","2","","

The number 1 tool I've found that I, as a tester (SDET), can leverage to improve dev-test relationships is honest flattery, especially in the form of seeking mentorship from devs.

Hopefully, the developers I work with are better developers than I am. They aren't perfect, or I wouldn't have a job, but there are a lot of things they know better than I do. They've been doing pure development, while my attention is partially focused on testing. I note those things that they do better, and I mention them frequently. When I read their code, I note elegant details or neat uses of design patterns and bring those up in conversation. I imitate the developers, using similar coding conventions when possible, and integrating components from production into my test tools when appropriate (e.g., logging). I recognize their expertise, and as a result they are happy to acknowledge mine. Mind you, if I think there is a better way to do things, then I absolutely do speak up - but I aim to give more positive feedback than negative, overall. Generally, I try to make negative feedback more formal and impersonal (e.g., bug reports) and positive feedback less formal and more personal (e.g., conversations in person).

Giving positive feedback about quality as well as negative feedback and asking for advice changes the relationship from being contentious to being about teamwork and mutual learning and lowers defensiveness. The developers know they can trust me to always say more good things than bad, so they feel comfortable listening to me. Also, asking insightful questions about development raises their opinion of me and breaks through the ""SDETs are failed devs"" stereotype (where it still exists).

","8888","","","","","2011-02-02 00:08:38","","","","0","","","","CC BY-SA 2.5" "238792","2","","238478","2014-05-12 10:19:34","","5","","

My suggestion is to use paper / cards & sounds.

Have everyone make their estimates with their cards / paper. Then everyone shows their estimate at the same time.

At this point the sighted members look around, and you even hear folks say, ""hmm, 5, 5, 2 - hey Bob, what's up with your 2?"" etc.

So expand a little more on that verbal part.

Have everyone make their estimates, show their card / paper and very quickly have everyone (in turn) say their numbers. Do that one-by-one (but quickly) so within 2-3 seconds you hear 5,2,5,4,5,5. The visually impaired person will then know the range of values and also whose voice goes with which estimate, which is probably essential (and also helps avoid the need for everyone to sit in the same spot each time).

Similarly if folks change their estimates during discussion make sure they verbalize any changes.

To avoid the all-important problem of estimates being influenced by others, consider having members write down their initial choices on scraps of paper (or use playing cards). Folks would choose the intended card initially and put the others down. Then, when the 'going around' is done, people would hold up their initially selected (and only) card AND say the points at that moment.

This is pretty close to what you are doing, the main change being to have each member verbalize their choice themselves, rather than you reading them all ""for Bob"" - which makes the ""read for Bob"" a distinct process which is not good socially for Bob. It singles him out as being different and having special needs.

Try to make these techniques integral to your flow so that an outsider wouldn't notice differences easily. Avoid any sort of ""now let's 'say' our choices"" or ""whoops, I forgot about Bob"". You can forget, but if you do, just say the number without discussion or apology and move on immediately.

Braille cards are good, but remember that they make ""Bob""'s impairment stand out, new/visiting team members have to be coached, and Bob can't work on other scrum teams without them learning how the cards will be used. All of that is a lot of focus on Bob, which is one of the things you want to avoid for someone with special needs. This is why I think the verbal approach avoids much of that.

","34069","","34069","","2014-11-04 12:32:47","2014-11-04 12:32:47","","","","3","","","","CC BY-SA 3.0" "355643","2","","355641","2017-08-14 16:00:56","","2","","

Coupling, like everything else in computing, is a tradeoff. You want loose coupling when it will benefit you. The coupling that occurs between your application and the .NET framework, or packages downloaded via Nuget, is some of the tightest coupling there is.

In line-of-business applications, loose coupling comes into play when you want to go beyond simple CRUD operations to actual business processes. Absent that requirement, you might as well just use the generated DTOs that Entity Framework provides. Entity Framework even gives you ""unit of work"" capabilities for free.

Consider, for example, an invoice. An invoice is unavoidably a business artifact. You don't store customers or products in invoices; rather, you look those things up from a database, and then reference them by ID. Your business layer translates between business objects (the invoice) and CRUD operations (people and things; entities). In so doing, it typically makes a ViewModel object that contains all of the entities required for the invoice. That's where your loose coupling comes into play.

","1204","","","","","2017-08-14 16:00:56","","","","1","","","","CC BY-SA 3.0" "43728","2","","43725","2011-02-04 15:18:41","","2","","

I'd first strongly recommend at least one face-to-face meeting for EVERYONE. I know this can get expensive and difficult, but it can make a huge difference when everyone actually gets together and hangs out for a while. Have a small work-related meeting, then a large social meeting. Drinks, dinner, sports, something people can bond over. If it's popular, try to make it a yearly thing to give people something to look forward to.

Also try to encourage more conference calls. If people can at least be in on the same discussion at the same time, they'll feel closer and more a part of the same group. Maybe even just one status meeting a week as a conference call. And conferencing can be done in IM chats; it doesn't even have to be voice calls.

","13156","","","","","2011-02-04 15:18:41","","","","0","","","","CC BY-SA 2.5" "355686","1","355691","","2017-08-15 08:46:56","","111","27696","

Backstory:
I have been working as part of this team for the past three years and in this time we have had three different Scrum Masters who have all run things differently.

This churn of Scrum Masters and their different ways of running the show has left my team numb to the idea of Scrum, because the principles haven't been enforced consistently, and one of the Scrum Masters was a person who did not believe in agile development and just went through the motions with the events and artifacts to comply with company decisions.

Now my team members are annoyed and bored when we do Scrum events, and one person in particular is very vocal about this.

Present:
Two months ago the company appointed me Scrum Master of my team because of my dedication to working agile and its principles.

I'm suffering greatly under the atmospheric pressure of my team members' unwillingness to do Scrum.

As mentioned they are annoyed about the entire process which makes it very difficult for me because they are not engaging in the necessary conversations needed to make Planning, Retrospective, and Daily Scrum effective.

To them, Planning is just a waste of time, because we just move overflow into the new Sprint and don't complete the work anyways, so why bother.

During Retrospective I can just feel that they want to say ""Stop doing Scrum"". One person does, but the others are silent and I have to deal with this every time.

Daily Scrum is again just a waste of time for them because none of them bothers to talk and plan the day. They just state ""I worked on task X yesterday and will work on that again today."" And most of the time they just joke around until I get more stern.

I have been very lenient when it comes to how they spend their time during these events. But I'm dying on the inside, because I have a passion for this and they don't care anymore.

Today the person who's always against me told me to stop saying ""They said this is what they committed to for this Sprint"" because, in his words, ""We never complete a Sprint. We just move in tasks and take in new ones in the next Sprint to fill up a quota. We do KanBan in reality. So stop saying that.""

I understand why he says this, but he doesn't seem to realize that this is how it is because he and everybody else on the team don't care. They just do work instead of dealing with impediments. They complain about the impediments, but don't do anything about them. And when I try to help they just shrug it off.

They used to give a damn, but over the past two years their willingness has declined to more or less rock bottom.

How can I make them see that joking around and wasting time during these meetings costs the company a lot of money?

","","user42401","4","","2017-08-17 19:02:22","2018-05-29 09:53:30","How do I deal with a counterproductive scrum team?","","14","27","31","","","CC BY-SA 3.0" "355710","2","","355703","2017-08-15 15:44:05","","6","","

Is there a general requirements-discovery practice in Scrum? For example, interviews and surveys with the client, after which that stack of requirements is shared with the team in the various sprint meetings for the backlog - or is it continuous discovery with the team and the stakeholders (user stories in the sprint meetings)?

Scrum is silent on the specific methods.

The Product Owner role is responsible for ensuring that the items in the Product Backlog are clearly expressed in a manner that is useful for the Development Team, that the order of Product Backlog items is managed, and that all stakeholders are clear as to what the team is working on now and will be working on next.

Nothing in the Scrum framework tells the Product Owner how to do these things. The only constraint is that the Product Owner is one person. There may be a team of analysts or product managers, but there is a singular person (although I recommend also identifying an alternate) that represents the stakeholders and product vision to the team.

Part of ensuring that the Product Backlog is in a good state is discovering and capturing requirements. But this can be done in any way that is comfortable for both the stakeholders and the product owner.

I have to create a design procedure which will describe how to approach UI design (mockups, prototypes, etc.) - but in Scrum, isn't it up to the team how they do it? An analyst would follow this procedure and, after forming a view of what the project is going to be, share it with the team to begin the implementation, thus removing the responsibility of recreating the entire development process for each application without a specific methodology.

Something that may help here is the concept of ""Definition of Ready"". The Scrum framework defines a ""Definition of Done"". The Definition of Done helps every team to make sure that the work is fully complete. However, some teams also develop a concept of ""Definition of Ready"" - the work that needs to be done before the backlog item can be started by the development team.

As far as the team choosing their own work approach, that is a principle of Scrum - the team can inspect the way they are doing work and adapt. However, it is OK to put restrictions on the way the team works, if it comes from outside guidance. It's also possible to perform ""process tailoring"", even if you have documented processes that correspond to ISO requirements. You just need to fully understand the intention of each ISO requirement and document organizational standards.


As an aside, I would recommend looking at Disciplined Agile. One of the process life cycle models implemented in Disciplined Agile is based heavily on Scrum. Other models are based on other agile and lean methodologies. However, unlike Scrum, it does address concerns like governance (ISO standards, CMMI, regulatory requirements) and how to implement agile methods in these environments.

","4","","4","","2020-06-12 09:44:38","2020-06-12 09:44:38","","","","0","","","","CC BY-SA 4.0" "355734","2","","355686","2017-08-15 21:35:36","","47","","

OK, so let's start rough - a big part of the problem is with you: you hear, but you don't listen. Your team is telling you clearly what the problems are. You need to address them instead of blaming your team.

Planning

To them, Planning is just a waste of time, because we just move overflow into the new Sprint and don't complete the work anyways, so why bother.

Exactly. If you consistently fail to allocate the correct amount of time to tasks, and they are consistently underestimated, it has very negative effects:

  • Developers feel like they are constantly under pressure.
  • ""I can't get anything done in time"".
  • Since the process does not work, they rightfully see it as a waste of time.

Solution: Fix your estimations using a combination of:

  • Story Points (as a combination of Time and Risk).
  • Do not allow tasks into a sprint that are > 55 SP
  • Comparative Estimations
  • Evidence Based Scheduling

As a basis for this, you absolutely need to track the time it actually took to finish previous tasks; this includes testing, writing documentation, writing tests, end-user training, integration efforts, deployment, etc.

Once you have a total time for a given task, you can base expected time on those previous tasks.

Ask every member whether the task given to them feels more complicated or easier than a selection of previous tasks, and adjust the number of allocated tasks based on that.

If you haven't used SP before, my advice is to start with 1h of real, honest-to-god work = 5SP as a guideline. Keep in mind that in a usual development environment you'll get maybe 6 of those per day, so 30SP / day max. Never ever allow a task that takes more than 2 days to get on the board. Ideally, in my experience, you should have 2 tasks per day.
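For the Evidence Based Scheduling bullet above, here is a rough, hypothetical Python sketch of the idea: keep the estimate-to-actual ratio for finished tasks, then turn a new raw estimate into a realistic range by sampling those historical ratios. All numbers and names are invented for illustration:

    import random

    # Historical record of (estimated hours, actual hours) for finished tasks,
    # including testing, documentation, deployment, etc.  Numbers are invented.
    history = [(2, 3.5), (4, 4.0), (1, 2.5), (3, 6.0), (2, 2.0)]

    # Each finished task yields a velocity: estimate / actual.
    velocities = [est / act for est, act in history]

    def simulate_completion(raw_estimate_hours, runs=10_000):
        # Monte-Carlo style: divide the raw estimate by a randomly sampled
        # historical velocity to get a plausible 'actual' duration.
        outcomes = sorted(raw_estimate_hours / random.choice(velocities)
                          for _ in range(runs))
        return outcomes[runs // 2], outcomes[int(runs * 0.9)]  # median, 90th pct

    median, p90 = simulate_completion(raw_estimate_hours=3)
    print(f'likely ~{median:.1f}h, plan for up to ~{p90:.1f}h')

The exact mechanics matter much less than the principle: estimates get corrected by recorded history, not by optimism.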

If you don't do Planning correctly, rest of your Scrum activities will look like a waste of time (including Planning).

Retrospective

During Retrospective I can just feel that they want to say ""Stop doing Scrum"". One person does, but the others are silent and I have to deal with this every time.

Reminds me of ""Daily beatings will continue until morale improves!"" and two of my past jobs. If you don't remove impediments, then they are correct that this is a waste of time.

Again, listen to what people are actually saying. If the complaints raised during the retrospective are not addressed, why bother doing them at all?

So:

  • Consider Six Thinking Hats techniques to improve the communication.
  • Reduce the time spent on Retrospective, 30 mins maximum.
  • Ensure that complaints raised during the Retrospective are addressed before the next one.

Daily SCRUMs

Daily Scrum is again just a waste of time for them because none of them bothers to talk and plan the day. They just state ""I worked on task X yesterday and will work on that again today."" And most of the time they just joke around until I get more stern.

Sounds like you have two problems here: SCRUM meetings are too long, and your planning and task creation sucks.

Both can make the scrum meeting feel like a waste of time.

For the SCRUM length:

  • Try 15 mins maximum.
  • Try everyone standing up.
  • Fixed formula:
    • What you did yesterday.
    • What you are planning to do today.
    • What your team members (not you!) should know about the task and how it will affect them.
  • Don't bother with impediments if you're not going to address them.

This is further evidence that your planning is hurting you - if you have nothing specific to report, that usually means the task is too big and all you could say was: I was working on it.

  • Break tasks down into the bullet points.
  • Ensure tasks are small enough to take less than a day. Ideally, IMO, a task should last ~3h and be equivalent to around 13 SP, so you can do 2 per day in most conditions.

Dealing with the team

Today the person who's always against me told me to stop saying ""They said this is what they committed to for this Sprint"" because, in his words, ""We never complete a Sprint. We just move in tasks and take in new ones in the next Sprint to fill up a quota. We do KanBan in reality. So stop saying that.""

He's right. You are wrong. You are doing bastardized SCRUM and/or variation on Kanban. Not their fault at all.

I understand why he says this, but he doesn't seem to realize that this is how it is because him and everybody else on the team don't care.

I don't think you understand at all. They might care less than they used to, but blaming them not only will not improve anything, it might just make the situation worse. If it was rock bottom, they might actually start digging.

They just do work instead of dealing with impediments.

And here I thought doing work is what their job was all about. I wonder who was supposed to be dealing with impediments.... oh right. A Scrum Master. It's your job. They tell you what's wrong. You fix it. Not the other way around.

This is probably why you have so many problems in the Retrospective.

How can I make them see that joking around and circle jerking during these meetings costs the company a lot of money?

Stop the useless meetings and they'll instead joke around watercooler. Also see the paragraph about beatings improving morale. If they are using humor as a defense mechanism, you have some serious problems sir!

Get in on a joke - as in work with your team, not against it. (Who the fuuuuuuck cares about the company's money? Are you a shareholder now?)

To summarize

Your bad planning is making other parts of SCRUM fail, and making everyone who participates miserable. They see that nothing changes, nothing is addressed, and their complaints go unheard.

Improve your planning, and you'll improve the flow and morale.

Do your job removing impediments and your team will progress faster. Ask them what they feel you should do to help them.

Most importantly: Listen to your people. They already told you (and me) what's the problem.

Good Luck!

","280711","","293672","","2018-05-29 09:53:30","2018-05-29 09:53:30","","","","15","","","","CC BY-SA 4.0" "355736","2","","355686","2017-08-15 22:33:53","","5","","

Getting your team to close your sprint effectively (like at least close 80% of stories) sprint over sprint is in my opinion the single most important thing you can do. If your team is consistently missing, then that's a clear indication that you need to adjust your estimates.

The team should be receptive to this, though it can be very hard to get developers to be more realistic about estimates. I worked on a team that didn't close a sprint for a year (consistently closing less than 50% of the sprint), always underestimated, and in every planning/retrospective I was a lone voice telling them your estimates suck, you're being foolishly over-confident, stop making excuses for what prevented you from making the estimate and instead adjust the estimate to reflect reality (perhaps more diplomatically than that, but that was the basic sentiment). When you're in that position, I would fully agree with the curmudgeon on your team who says the process is a waste of time, because you are in fact doing Kanban, regardless of what you call it. At a certain point, his opinion becomes validated by the circumstances. It's hard to overcome that inertia, but if you can't do that then I don't think the team will ever be very successful.

At some point you have to reset things: you have to get the team to drastically overcompensate on their estimates just to get the system in motion. Once a team begins closing stories consistently, they should realize that the Agile process is mainly common sense and failing to materialize it in some fashion is harmful to your productivity. But so long as the 'commitment' and 'sprint goals' aren't taken seriously, which happens when they aren't achieved consistently, then the whole thing is a sham and becomes a drain on the team's productivity.

Getting people to buy in on a significantly different sprint (in terms of estimates, planning, the commitment, etc.) is difficult; on that team I eventually accomplished that due to two factors. One was just collecting the data from JIRA and saying ""there is no excuse for this, the numbers are way off, they're always off in one direction, we need to correct it, I don't want excuses in retro, I want the numbers to add up"" - and the other was pressure from higher up in management, which I eventually explained to them like... ""The point of this process is to make our development timeline predictable. If we complete a constant amount of work every sprint that's fine; independent of that, our board needs to accurately reflect what we do get done. Management is more aware of our success relative to the commitment than it is to our actual output, so for your own sake, make the projection line up with the output so it looks like you're getting your work done rather than spending half of every sprint doing nothing. The further removed people are from this, the more they just see you failing; the excuses you make in retro aren't even something in their purview, they just see you failing.""

Eventually our team got traction and things started going a lot more smoothly and, lo and behold, we even started to have higher output once the process actually started working. So tl;dr - do whatever is necessary to start closing sprints with a relatively high degree of success. If you're not doing that, the curmudgeon on your team will continue to have his resistance to Scrum validated by the results and ultimately will be right that your process is in fact just a sham and a waste of everyone's time.

","127144","","","","","2017-08-15 22:33:53","","","","0","","","","CC BY-SA 3.0" "355852","2","","355823","2017-08-17 15:55:13","","9","","

The general problem is a whole subarea of programming called data cleansing, which is part of a larger area called data integration. Avoiding these sorts of issues is likely a large part of the reason for the migration from Excel sheets and why the senior dev doesn't want to allow a field to become nullable. I don't think it's unreasonable to say that this is one of the larger sources of complexity in data migrations.

Just choosing to use NULL whenever you could is likely very much the wrong thing to do, let alone changing the data model to make yet more fields nullable. Excel has weak or no integrity checking which is likely the cause of many of these issues. The wrong thing to do is to remove the integrity checking in the new database and dump garbage into it. This just perpetuates the problem and adds significant complexity to future integrations which have to somehow deal with nonsensical data.

Some of the difference is likely due to data model mismatch. Dealing with this is largely a matter of being (intimately) familiar with both data models and knowing how to map the old one to the new one. As long as the new one is capable of capturing the old one. (If not, your team likely has a very big problem.) This can easily require doing more work than just copying columns. Darkwing gives an excellent example of this (as well as why blindly inserting NULLs is the wrong thing to do). Elaborating upon it, if the old model had a ReceivedDate and an InProgress bit and the new model has a StartDate and ProcessingEndTime, you will need to decide if and how to set the ProcessingEndTime. Depending on how it's used, a reasonable (but arbitrary) choice might be to set it to be the same as the StartDate (or shortly afterwards if that would cause problems).

However, some of the difference is likely due to data that ""should"" be there that is missing or corrupted. (Most likely from data entry errors or poorly handled past migrations or bugs in data processing systems.) If no one on your team anticipated this, then you (collectively) have set yourselves up to spend 20% of the project's time being ""almost"" done. (That was a made-up number, but it can be far worse than that, or better. It depends on how much data is incorrect, how important it is, how complex it is, how easy it is to get involvement from those responsible for the data, and other factors.) Once you've determined that the data is ""supposed to be"" there but is missing, you'll usually attempt to determine the extent of the problem by querying the old data sources. If it's dozens or hundreds of entries, then it's probably data entry errors and the customers responsible for the data should manually resolve it (i.e. tell you what the values should be.) If it's millions of entries (or a significant fraction of the data), then you may need to reconsider whether you correctly identified that it ""should be"" there. This might indicate a modeling error in the new system. When you ask the people using the data about the missing data, they are often somewhat aware of it and have ad-hoc ways of dealing with it.

For example, imagine an invoice that had quantities and per item totals (but not unit price), except that some of the quantities were inexplicably missing. Talking to the person who processes such invoices might produce one (or more) of the following scenarios: 1) ""oh, a blank quantity means a quantity of 1"", 2) ""oh, I know those items go for around $1,000 so, clearly this is an order for 2"", 3) ""when that happens, I look up the price in this other system and divide and round"", 4) ""I look it up in another system"", 5) ""that's not real data"", 6) ""never seen that before"".

As suggested, this can indicate some ways of automatically resolving the situation, but you have to be careful that the solution applies to all cases. It is common for other systems to be involved that can cross-check the data, and this is a good thing. However, it's often a bad thing insofar as it can be difficult to gain access to and integrate with these systems to perform the cross-checking, and it often comes to light that the systems conflict with each other not just by one missing some data. Some manual intervention is often required, and depending on the scale, may well require tooling and interfaces to be created specifically for the data cleansing task. Often what is done is the data is partially imported but rows with missing data are sent to a separate table where they can be reviewed. Often this will need to be done at an appropriate granularity for consistency in the new system (i.e. reject invoices not individual line items even if most of the line items are fine in a particular invoice) and it can lead to cascades (if I can't import a client, then I can't import any invoices for that client).
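As a rough illustration of that last pattern (not a prescription), here is a hypothetical Python sketch that imports complete invoices and parks incomplete ones, at invoice granularity, for review; the field names and the rejection rule are invented for illustration:

    # Hypothetical migration step: import complete invoices, park incomplete ones
    # (at invoice granularity) for manual review.  All field names are invented.
    REQUIRED_LINE_FIELDS = ('item_id', 'quantity', 'line_total')

    def split_for_import(invoices):
        importable, needs_review = [], []
        for invoice in invoices:
            missing = [field
                       for line in invoice['lines']
                       for field in REQUIRED_LINE_FIELDS
                       if line.get(field) is None]
            if missing:
                # Reject the whole invoice, not just the bad line, so the new
                # system never ends up holding half an imported business document.
                needs_review.append({**invoice, 'missing_fields': sorted(set(missing))})
            else:
                importable.append(invoice)
        return importable, needs_review

    legacy = [
        {'id': 1, 'lines': [{'item_id': 'A', 'quantity': 2, 'line_total': 50.0}]},
        {'id': 2, 'lines': [{'item_id': 'B', 'quantity': None, 'line_total': 80.0}]},
    ]
    ok, review = split_for_import(legacy)
    print(len(ok), 'importable;', len(review), 'sent to the review table')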

","211449","","","","","2017-08-17 15:55:13","","","","1","","","","CC BY-SA 3.0" "355883","2","","355686","2017-08-18 10:36:30","","4","","

As a Scrum Master you coach and guide the team to become more productive. The Scrum framework is a powerful tool to get there, but the Scrum framework absolutely must not ever become the goal by itself - otherwise you're doing Cargo Cult.

It seems you've been doing Cargo Cult for 3 years now and people realized that's a horrible project management methodology. The good news is you've got smart people, they got it right. Unfortunately, some people in your company are calling it Scrum, but again you've got smart people, they even told you what the team's doing isn't called Scrum.

Planning is just a waste of time, because we just move overflow into the new Sprint and don't complete the work anyways, so why bother.

Exactly. Why bother? You need to find an answer, or rather you need to change the planning and the sprint itself so there is a point to planning. Either that, or stop wasting everybody's time with a pointless Sprint Planning. You may want to ask the company to send you on a Scrum Master training, because running a great Sprint Planning is not trivial.

During Retrospective [...] the others are silent and I have to deal with this every time.

If you're dealing with the same issue every Retrospective, and people don't even bother (anymore?) to speak up during the Retrospective, that's just a waste of time. Unless you or the team can somehow address the issues raised in the Retrospective, the meeting is just demoralizing the team. Issues raised in the Retrospective must be addressed, and there should be progress by the next Retrospective.

Daily Scrum is again just a waste of time for them because none of them bothers to talk and plan the day. They just state ""I worked on task X yesterday and will work on that again today.""

Indeed, why bother wasting everybody's time if they just work on the same tasks multiple days? They are absolutely correct: that Daily Standup is indeed pointless. The Standup facilitates close collaboration on many tasks which can each be completed in half a day or less. If your tasks can't be broken down that way (due to broken Sprint Planning, or because your tasks actually don't fit well with Scrum), there's not much of a point to holding the 5-minute Daily Standup meeting (it is no longer than 5 minutes, right?).

""We never complete a Sprint. We just move in tasks and take in new ones in the next Sprint to fill up a quota. We do KanBan in reality. So stop saying that.""

I understand why he says this, but he doesn't seem to realize that this is how it is because him and everybody else on the team don't care.

No, you absolutely do not understand why he says this. You got the root cause of the impediment - which you need to resolve - wrong. They don't care because the team's project management practices suck. Caring about doing Cargo Cult and doing Cargo Cult harder doesn't stop it from being Cargo Cult, one of the worst project management methodologies in existence (in your defense: also the most widely used).


What can you do about this?

  1. Listen to their concerns. Again, you're blessed in that you've got smart people.

  2. Help them resolve the impediments.

And that's it, really. You'll need to experiment with how to change Sprint Planning, Daily Scrum and Retrospectives to make them valuable to the team - even if you wanted to drop the Scrum label, you still have these 3 meetings with similar frequency and similar purpose in every other project management methodology out there. As for how you're going to experiment (frequency, content, who hosts the meeting, time, duration, etc), that's surprisingly easy: Ask the team. Don't force your ideas on them; if anything, you should let them force their ideas on you (assuming they can agree on some).

If you're afraid you'll lose control, set some boundaries beforehand, e.g:

  • The labels of the meetings stay the same.
  • The meetings still take place and frequency of the meetings cannot change by more than a factor of 2.
  • You're currently experimenting, so any change only lasts for 2 sprints, after which you revert to the old pattern (best give the time in weeks to avoid ambiguity when they want to double the sprint duration).
","169135","","169135","","2017-08-18 10:56:20","2017-08-18 10:56:20","","","","0","","","","CC BY-SA 3.0" "240779","2","","240774","2014-05-21 23:09:41","","4","","

This is a big question, but the algorithms fall broadly under the category of signal processing.

In short, there are a couple of things that make a voice stand out (or any other sound, for that matter, though I will call every sound a voice for simplicity's sake). They are pitch, timbre, and loudness.

Pitch is probably the most familiar, but specifically it refers to the frequencies occupied by a voice. Most people's voices have a uniform pitch, which is to say I've never heard of a single person singing a chord.

Timbre is what makes the difference between a saxophone and a violin playing the same note. It's like the flavor of the sound, and can be affected by the acoustics of the room, the noise (is it breathy or raspy), resonance, etc.

Loudness is like the average pressure of the air's movement. It's surprisingly complicated, but for this explanation, thinking of it as a volume knob works.

Okay? Now, to isolate a sound, we can try and trace these three factors. It's pretty rare for a voice to change pitch, timbre, and loudness simultaneously. So we'll call a dramatic change in two of these a different voice.

A short-time Fourier transform of your audio signal can give you a ""frequency and loudness over time"" view of it (a spectrogram). Looking at such a plot, you can begin to get an idea of how this can be carried out. Once you know where a voice lies along the spectrum in time, you can apply a moving band-pass filter to it to isolate that signal.
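As a very rough sketch of that last step, here is a hedged Python example using NumPy and SciPy (the fs argument to butter assumes a reasonably recent SciPy); the band edges are invented, and a real system would track the voice's band over time rather than using one fixed band:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 16_000                      # sample rate in Hz (assumed)
    t = np.arange(0, 1.0, 1 / fs)

    # Synthetic 'mix': a 220 Hz voice-like tone plus a 2 kHz interfering tone.
    mix = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

    # The frequency view described above: find where the dominant component sits.
    spectrum = np.abs(np.fft.rfft(mix))
    freqs = np.fft.rfftfreq(mix.size, 1 / fs)
    print(f'strongest component near {freqs[spectrum.argmax()]:.0f} Hz')

    # Band-pass around the band where the voice was found (edges are made up).
    sos = butter(4, [150, 400], btype='bandpass', fs=fs, output='sos')
    isolated = sosfiltfilt(sos, mix)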

Still, this is something that has taken a long time for people to get right. Siri is just the latest in a long line of voice recognition applications of varying ability.

","74273","","","","","2014-05-21 23:09:41","","","","0","","","","CC BY-SA 3.0" "136776","1","136793","","2012-02-24 14:48:04","","10","1938","

We're in a bad situation of having very little documentation on the customizations our past workers made to a business-critical system. Lots of changes were made to Crystal Reports, database entities, and proprietary configuration/programming files for our ERP software.

The current documentation generally reads something like this:

This program is run before invoicing. Known bugs: none.

Run this program after installing software X.

Changed the following fields in this report: (with no explanation of how or why)

Our IT shop is small, and in the case of the ERP software, most work was lumped on one person (that's me now) so no one else here knows what all we did. The IT and accounting department know bits and pieces (occasionally quite helpful ones) but it's not enough.

Another problem is our Accounting department seems to think we're well documented. It's true that we kept lots of records of what went wrong, but very little explains what (if anything) was done to fix these problems. We have hundreds of papers explaining bugs, but the documents explaining changes (as shown above) are almost useless.

How can I go about documenting past changes when I don't know what all was done? I can start by documenting what we've changed: files, database tables, etc., which we need to have for the system to work. I can also document what we do: when reports are run, why people were told to use X report/program. But when one of these customized things has a problem, I'm always back to square one.

How can I proactively document this stuff for myself and others?

","35414","","","","","2012-02-24 17:26:05","How can I document someone else's past work?","","7","0","","2016-01-10 04:53:05","","CC BY-SA 3.0" "241039","2","","144326","2014-05-24 19:11:52","","17","","

try-catch in JavaScript is just as valid and useful as in any other language that implements it. There is one major reason it's not used as much in JavaScript as in other languages. It's the same reason JavaScript is seen as an ugly scripting language, and the same reason people think JavaScript programmers aren't real programmers:

  • Javascript is an incredibly accessible and pervasive language

The mere fact that so many people are exposed to JavaScript (by virtue of being the only language supported by browsers) means that you have lots of unprofessional code out there. Of course, there are also many minor reasons:

  • some things in JavaScript are asynchronous, so an error thrown later in an asynchronous callback can't be caught by the surrounding try-catch
  • there has been much overblown talk about how try-catch has a huge performance hit. It has a bit of a performance hit, but for most code, it is well worth it.
  • JavaScript was (unfortunately) implemented in a way that often silently ignores errors (automatically coercing strings to numbers, for example)

Regardless, try-catch should be used, but of course you should learn how to use it properly - like everything else in programming.

","132441","","-1","","2017-05-23 11:33:36","2014-05-24 19:11:52","","","","3","","","","CC BY-SA 3.0" "137233","2","","137193","2012-02-28 02:50:50","","15","","

I don't know if this is your team's issue but it definitely was for us when we first introduced scrum. Our management came to us one day and said, from now on you will not be working in individual silos. Instead, you will be working as a scrum. Here's a bunch of new processes you must all follow and follow them you will.

The key is that they never came to us, the developers, and asked: how do you guys want to work? What will make you happier? More efficient? So what I heard was, ""you no longer own any code. Anything you write will get trampled on (you know, team ownership). You will not move or lift a finger because we will now manage your time by the hour"". Oh, and now you have a boring 15-minute stand-up every day where people will discuss things you don't care about and it will usually take 30 minutes, and then every two weeks you will have an uber-boring 4-hour planning meeting that is sure to suck all life out of you.

In reality this is not Agile or Scrum, this is just moving from one style of management to a different style, where everything is still centrally controlled, and not only did this suck all life out of me, but it also gave me lots of free time to update my resume.

In the last twelve months, after I lobbied numerous times for our team manager to try something different, he actually took me up on my suggestions, and I think we've had a very successful year.

I believe the key change for us was to give developers much more voice and freedom in choosing how we want to work. A few things we did:

  1. Break large ""agile"" development team into 3 small ones so that each one only has 3-4 developers. This makes everyone engaged and individuals are not drowned out.
  2. Make sure everyone in the same team works around the same functional area so that people care what others are talking about in stand ups and iteration plannings.
  3. Instead of management simply picking who works on what and assigning stories/tasks, we came up with a backlog and the team itself had a lot of say in how the work is divided.
  4. Because we had many new members, we started with somewhat of a silo system where each person owns a primary area of responsibility. This allowed new people to focus on a smaller area of an unknown product and also get a faster sense that they are not playing in someone else's sandbox. But 6-8 months into the program, those areas started to morph as the boundaries became more gray. Now the guys on the teams I'm on are fairly comfortable stepping into others' code or having other developers work in theirs.
  5. Code reviews of all submissions were key (and this was the first thing that was skimped on when we first did Scrum):
    • Knowledge transfer in terms of programming techniques/methods
    • Was great for others to learn code they wouldn't have seen otherwise
    • Your team gets a chance to communicate and socialize which improves team dynamics
    • And I guess, code reviews will catch a bug or two, but I see their value mostly in the above aspects.
  6. Management has to listen to the team. If the team says something doesn't work or needs to be changed, and they simply ignore that, then the team members will simply check out and let the management deal with the project. If you want people motivated, they need to be invested, and they will only be invested if they are doing what they believe is right, not what they are told to do from up top.
","20673","","","","","2012-02-28 02:50:50","","","","0","","","","CC BY-SA 3.0" "45785","2","","45627","2011-02-09 09:42:38","","3","","

Refactoring without unit tests is really, really risky. If you have a very solid set of system tests, you might get away with it, but still, chances are, you will introduce new bugs. So one possible workaround is simply to write unit tests without telling your boss. If he is not micromanaging, he won't know what you are spending your time on, and you can just silently include the time to write unit tests into your estimates for refactoring tasks.

Regarding how to reimplement an existing codebase, see also this older answer of mine.

You added to your post:

While I agree I will need to write Unit Tests for this, I don't believe I will capture everything with them. I'm looking for ways to easily be able to revert to the old, functional code should anything happen. While I know this is a poor practice, I'm planning on removing this code after our team can guarantee that the new code works to the same standards as the old.

How can your team ever guarantee that? As you yourself correctly note, you won't realistically be able to capture all bugs with testing. Even more fundamentally, you can't prove by testing that a nontrivial piece of code is bug-free.

As I suggested in my earlier answer, IMHO the best you can achieve is to write a thorough set of tests against a common interface/facade, and develop the new implementation so that it satisfies all existing tests from the beginning.
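One way to realize that in practice - sketched here in Python with pytest parametrization, where the facade and both implementations are placeholders invented for illustration - is to run the exact same test suite against the old and the new implementation:

    import pytest

    # Hypothetical facade: both the legacy and the new implementation expose
    # the same interface; everything here is invented for illustration.
    class LegacyTaxCalculator:
        def tax_cents(self, amount_cents):
            return amount_cents * 19 // 100

    class NewTaxCalculator:
        def tax_cents(self, amount_cents):
            return amount_cents * 19 // 100   # must reproduce legacy behaviour

    # The same specification runs against the old and the new code, so the new
    # implementation has to satisfy every existing test from the beginning.
    @pytest.fixture(params=[LegacyTaxCalculator, NewTaxCalculator])
    def calculator(request):
        return request.param()

    def test_standard_rate(calculator):
        assert calculator.tax_cents(10_000) == 1_900

    def test_zero_amount(calculator):
        assert calculator.tax_cents(0) == 0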

Note also that there is a psychological barrier here: if you (and your users) don't trust the new implementation, you may keep deferring its deployment till the end of time. At some point, you have to make the leap of faith by declaring that it is good enough to replace the old one, releasing it officially, and retiring the old version, fixing any issues in the new implementation as they appear.

","14221","","-1","","2017-04-12 07:31:33","2011-02-09 12:29:37","","","","2","","","","CC BY-SA 2.5" "46035","2","","46029","2011-02-09 20:32:50","","11","","

It could shut your business down.

If you do not store the card details, then identify how there could be a leak - quickly.

[I believe they are allowed to fine you up to $100,000 per incident if you fail to notify within 24 hours of discovery.]

If you find a leak, then do not notify customers; notify the credit card companies - they require you to do this, and failure to do so can land you in very hot water indeed. Expect to be required to be audited at a minimum; you could be heavily fined as a company.

","7990","","7990","","2011-02-09 20:41:28","2011-02-09 20:41:28","","","","0","","","","CC BY-SA 2.5" "138369","2","","138359","2012-03-05 13:29:16","","5","","

The first thing that I'd recommend is having your team get involved. If you truly are being productive and contributing positively to the team, the people you are working with should be more than willing to say this to the appropriate people at the right time. They would be best suited to speak to your strengths, weaknesses, and measure your contributions in terms of its value to the project. This is why, in all of my experiences, colleagues and teammates provide assessments and evaluations of each other to management (or, at university, the professor) for consideration - these are the people who know you best.

If you need more concrete data, you should have plenty of it, since you are using email and a project management tool. Hopefully, you should also be using a version control system. You can show how long it takes you to respond to emails (and IMs if you are logging IMs) related to the project. You can show check-ins and merges from version control, with diffs to show what you have contributed. You can show the tasks that you have completed (and depending on how your PM tool is set up, time tracking).

Another avenue would be to discuss the lab environment with the people who manage/operate it. If it's truly supposed to be a work environment and people are distracting you from working, it's not meeting its purpose. This might be something that needs to be addressed to ensure that people have a productive environment. You can't expect a silent team space, since teams will have conversations. But I don't think it's too much to ask to have a space that's dedicated to work-related discussions at a reasonable volume.


Given your more recent edit, I don't think there's much you can do.

At my job, I spend about 8 to 10 hours/week in a team environment. Yes, we need to coordinate activities and resource utilization and plan tasks and so on. But the other 32 (and usually more like 35+) hours/week are me working on my assignments solo. However, I am available (as in have email notifications on, IM client on, at or near my desk to answer my phone or check for messages) far more frequently.

In order to maximize team communication and productivity, many companies have the notion of ""core hours"". These are hours that everyone on the team not on vacation or some other scheduled/approved leave is expected to be available. This applies equally to people with virtual presence as well as actually in the office. Where I work now, core hours are 6 hours a day (I think 5 hours on Friday), and different teams make slight modifications to core hours.

The ""99%"" asked for by management is probably a little extreme. However, being readily available 75% of the time, and answering emails, calls, and IMs during occassions when you aren't with the group physically, is not out of the question at all. Even if you don't think these are urgent, your lack of a response could be blocking someone else.

","4","","4","","2012-03-05 22:05:00","2012-03-05 22:05:00","","","","6","","","","CC BY-SA 3.0" "138416","2","","138396","2012-03-05 18:28:20","","8","","
  • I always make sure the developer wants my help, and I take great care not to go deeper into explanations than their patience can tolerate. Like everybody, I love the sound of my own voice!
  • I treat them as equals, and make sure to ask their opinion as often as I sound off.
  • Catch them doing something right and let them know.
  • I always learn something when I do this right -- about my craft, about my profession, about the developer, and about teaching.
  • The first lesson always is: when to know you've been trying it on your own too long. Many people take pride in finding their own answers, and burn valuable time going in circles.
","49137","","49137","","2012-03-06 16:18:34","2012-03-06 16:18:34","","","","3","","","2012-03-06 20:18:58","CC BY-SA 3.0" "138459","2","","138359","2012-03-06 01:06:41","","6","","

This really sounds like a code monkey farm: professors want you to adapt to the ""war room"" environment (aka the open-space environment) that so many companies nowadays think is optimal.

By the time you leave college, the optimal formula will be different again (maybe they'll finally settle on some form of remote work).

By the time I enrolled, everybody thought cubicles were the best of the best.

Then, as college kids joined the workforce, ""the industry"" changed its mind: college kids had kind of developed a taste for computer labs where source control was done by screaming ""I'm editing this file now!"" and similar antics.

Real version control used to suck (i.e. CVS, a little SVN), there was no Facebook to distract, and no smartphones to circumvent LAN restrictions, so... it kinda worked.

""War rooms"" were modeled after those college kids.

Those rooms didn't age very well, though.
""War rooms"" today end up being of two kinds:

  • noise farms, where everybody is doing whatever he likes (and mufflers, ear plugs, or headphones are the only way around it)

  • awkward silent panopticons, where nobody is doing anything of his liking, everybody checks on everybody, and everybody feels more and more miserable, every minute that passes.

So, instead of telling you how to work, they should observe how you get the work done and then write papers about it, so that ""the industry"" may learn from it.

But then again, my guess is you're in a code monkey farm: they are building computer scientists for ""the industry"", the way ""the industry"" wants them, and you're going to run into trouble if you stand up too much for your (sane) work habits.

Cope with what they're requesting, drag around the weight of your unproductive team members (poor souls, they would probably perform better in a more focused environment) and let the overall code quality degrade: you're in that class to prove that teamwork, done in the way your professor is evangelizing, doesn't work very well.

","30396","","30396","","2012-03-06 12:40:59","2012-03-06 12:40:59","","","","0","","","","CC BY-SA 3.0" "138678","2","","138643","2012-03-07 17:12:54","","109","","

I'll expand on my comment.

I think there are a few factors that influenced the use of Python in scientific computing, though I don't think there are any definitive historical points where you could say, "Yes, that is the reason why Python is used over Ruby/anything else"

Early History

Python and Ruby are of roughly the same age - according to Wikipedia, Python was officially first released in 1991, and Ruby in 1995.

However, Python came to prominence earlier than Ruby did, as Google was already using Python and looking for Python developers at the turn of the millennium. Since it's not like we have a curated history of uses of programming languages and their influences on people who use them, I will theorize that this early adoption of Python by Google was a big motivator for people looking to expand beyond just using Matlab, C++, Fortran, Stata, Mathematica, etc.

Namely, I mean that Google was using Python in a system where they had thousands of machines (think parallelization and scale) and constantly processing many millions of data points (again, scale).

Event Confluence

Scientific computing used to be done on specialty machines like SGIs and Crays (remember them?), and of course FORTRAN was (and still is) widely used due to its relative simplicity and because it could be optimized more easily.

In the last decade or so, commodity hardware (meaning stuff you or I can afford without being millionaires) has taken over in the scientific and massive computing realm. Look at the current top 500 rankings - many of the top ranked 'super computers' in the world are built with normal Intel/AMD hardware.

Python came in at a good time since, again, Google was promoting Python, and Google was using commodity hardware, and they had thousands of machines.

Plus if you dig into some old scientific computing articles, they started to spring up around the 2000-era.

Earlier Support

Here's an article written for the Astronomical Data Analysis Software and Systems, written in 2000, suggesting Python as a language for scientific computing.

The article has this quote about Python:

Python is an interpreted object-oriented programming language that is starting to receive considerable attention in scientific applications (Python, 1999). This is because Python, and scripting languages in general, represent a next logical step for many scientific projects (Dubois 1994). First, Python provides an interpreted programming language that can be viewed as an extension of the simple command languages already used by scientific programs

Second, Python is easily integrated with software written in other languages. As a result, it can serve as both a control language for driving existing programs as well as a glue language for combining different systems together. Finally, Python provides a large collection of third party modules, an established user base, and a variety of documentation in the form of books and online references. For this reason, one might view it as a highly polished and expanded version of what scientists often try to accomplish when writing their own command interpreters.

So you can see that Python had already had traction dating back to the late 90s, due to it being functionally similar to the existing systems at the time, and because it was easy to integrate Python with things like C and the existing programs. Based on the contents of the article, Python was already in scientific use dating back to the 1995-1996 timeframe.

Difference in Popularity Growth

Ruby's popularity exploded alongside the rise of Ruby on Rails, which first came out in 2004. I was in college when I first really heard the buzz about Ruby, and that was around 2005-2006. Django for Python was released in the same time frame (July 2005, according to Wikipedia), but the focus of the Ruby community seemed very heavily centered on promoting its usage in web applications.

Python, on the other hand, already had libraries that fit scientific computing:

  • NumPy - NumPy officially started in 2005, but the two libraries it was built on were released earlier: Numeric (1995), and Numarray (2001?)

  • BioPython - a biological computing library for Python, dating back to at least 2001

  • SAGE - Math package with first public release in early 2005

And many more, though I don't know all of their timelines (aside from just browsing their download sites). Python also has SciPy (built on NumPy, released in 2006), had bindings with R (the statistics language) in the early 2000s, got matplotlib, and also got a really powerful shell environment in IPython.

IPython was first released in the early 2000s, and has had many features added to it that make it very nice for scientific computing, like integrated matplotlib graphing and being able to manage computational clusters.
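
To give a concrete flavour of what that stack offered, here is the kind of short interactive session it enabled (a minimal sketch; the damped-oscillation example and its numbers are arbitrary, not taken from the article):

    import numpy as np
    import matplotlib.pyplot as plt

    # Vectorized array math, no explicit loops: a damped oscillation
    # evaluated at 1000 points in a single expression.
    t = np.linspace(0, 10, 1000)
    y = np.exp(-0.3 * t) * np.sin(2 * np.pi * t)

    print(y.mean(), y.max())   # quick numeric summaries

    plt.plot(t, y)             # shows up inline in IPython/notebooks
    plt.xlabel("time")
    plt.ylabel("amplitude")
    plt.show()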

From above article:

It is also worth noting a number other Python related scientific computing projects. The numeric Python extension adds fast array and matrix manipulation to Python (Dubois 1996), MMTK is Python-based toolkit for molecular modeling (Hinsen 1999), the Biopython project is developing Python-based tools for life-science research (Biopython 1999), and the Visualization Toolkit (VTK) is an advanced visualization package with Python bindings (VTK, 1999). In addition, ongoing projects in the Python community are developing extensions for image processing and plotting. Finally, work presented in (Greenfield, 2000) describes the use of Python in projects at the STScI.

Good list of scientific and numeric packages for Python.


So a lot of it is probably due to the early history, and the relative obscurity of Ruby until the 2000s, whereas Python had gained traction thanks to Google's evangelism.

So if you were evaluating scripting languages in the period from 1995 - 2000, what were you really looking at? There was Perl, which was probably different enough syntactically that people didn't want to use it, and then there was Python, which had a clearer syntax and better readability.

And yes, there is probably a lot of self-reinforcement - Python already has all these great, useful libraries for scientific computing, while Ruby has only a minority voice advocating its use in science. Some libraries are sprouting up, like SciRuby, but Python's tools have matured over the last decade.

Ruby's community at large seems to be much more heavily interested in furthering Ruby as a web language, as that's what really made it well known, whereas Python started off on a different path, and later on became widely used as a web language.

","5594","","-1","","2020-06-16 10:01:49","2012-03-07 20:25:36","","","","4","","","","CC BY-SA 3.0" "244606","2","","244593","2014-06-11 03:14:00","","168","","

I would never participate in a code test of this nature. I have taken many code tests and done many code projects. I certainly wouldn't check code into someone else's repository under any circumstances. If they don't know what they need to know after a 4-hour sample with some minor bug correction in a pair-programming session, then they won't ever know.

Going into a test, you should know and make clear a few things up front:

  1. It should be agreed upon and known that any work produced during the test may not be used for any purpose other than determining your skill at the required tasks.
  2. A code test should not last more than 4 hours.
  3. You are not an employee of the company, so any suggestion that you might be paid for code produced is preposterous. Insist on a written contract of payment if there is even a hint of this.
  4. Set specific limits on the time you will spend on any given part of the test, and then stick to those limits. If you find yourself going over the limits for any reason, consider why you are going over that limit. Is it because of pressure from them? Is it because you've made mistakes? Is it because you've poorly estimated how long something should take to complete?
  5. Stand your ground if you feel you have covered a particular topic. If you've already fixed a bug, and they're asking you to fix a nearly identical bug, say ""We've already covered that topic with bug x, perhaps we could move to something else that demonstrates something new.""
  6. Under no circumstances should you check anything into a production pipeline. This includes any kind of development branch that may ultimately lead to a production pipeline. When in doubt, check nothing in. For code tests that are not necessarily in person, I insist that the code be checked into my personal public repository first. This gives me at least some kind of protection from having my work used inappropriately.
  7. Judge them for their behavior every bit as much as they are judging you. If you feel they are not being up front with you, call them on it. If you feel you are being mistreated, speak up.

The company you are interviewing with is also being interviewed by you. If this is how they are treating someone they are interviewing, is this a company you want to work for? I understand that often people have a need for a job and often this need will override some common sense concepts, but this should always be in the forefront of your mind. Don't be afraid to walk out. If it doesn't feel right, follow your instincts and vote with your feet.

","16747","","","","","2014-06-11 03:14:00","","","","12","","","","CC BY-SA 3.0" "47860","1","47866","","2011-02-14 22:14:17","","6","1274","

Honestly, I hate the word ""Pythonic"" -- it's used as a simple synonym of ""good"" in many circles, and I think that's pretentious. Those who use it are silently saying that good code cannot be written in a language other than Python. I'm not saying Python is a bad language, but it's certainly not the ""be-all, end-all language to solve ALL of everyone's problems forever!"" (Because that language does not exist). What it seems like people who use this word really mean is ""idiomatic"" rather than ""Pythonic"" -- and of course the word ""idiomatic"" already exists. Therefore I wonder: Why does the word ""Pythonic"" exist?

","886","","","","","2011-02-14 22:31:59","Why does the word ""Pythonic"" exist?","","2","6","1","2011-11-15 19:57:51","","CC BY-SA 2.5" "140073","1","140075","","2012-03-16 15:01:38","","13","1010","

First allow me to coin a term:

code goal-tending: Checking out code in the morning, then silently reviewing all of the changes made by the other developers the previous day, file by file (especially code files you originally developed), fixing formatting and logic, renaming variables, refactoring long methods, etc., and then committing the changes to the VCS.

This practice tends to have a few pros and cons that I've identified:

  • Pro: Code quality/readability/consistency is often maintained
  • Pro: Some bugs are fixed due to the other developer not being too familiar with the original code.
  • Con: Is often a waste of the goal-tending developer's time.
  • Con: Occasionally introduces bugs, causing hair-pulling rage in developers who thought they wrote bug-free code the prior day.
  • Con: Other developers get aggravated with excessive nitpicking and begin to dislike contributing to the goal-tender's code.

Disclaimer: To be fair, I'm not actually a development manager, I'm the developer who is actually doing the ""goal tending"".

In my defense, I think I'm doing this for good reason (to keep our extremely large code base a well oiled machine), but I'm very concerned that it's also creating a negative atmosphere. I am also definitely concerned that my manager will need to address the issue.

So, if you were the manager, how would you address this problem?

UPDATE: I don't mean for this to be too localized, but some have asked, so perhaps some background will be illuminating. I was assigned a giant project (200K LoC) three years ago, and only recently (1 yr ago) were additional developers added to the project, some of whom are unfamiliar with the architecture, others of whom are still learning the language (C#). I generally do have to answer for the overall stability of the product, and I'm particularly nervous when changes are unexpectedly made to the core architectural parts of the code base. This habit came about because at first I was optimistic about other developers' contributions, but they made way too many mistakes that caused serious problems that would not be discovered until weeks later, at which point the finger would be pointed at me for writing unstable code. Often these ""surprises"" are committed by an eager manager or a co-worker who is still in the learning phase. And this probably leads to the answer: we have no code review policy in place, at all.

","39690","","39690","","2012-03-16 16:29:48","2012-03-16 18:44:13","How should code ""Goal Tending"" be handled by a Development Manager?","","4","10","4","","","CC BY-SA 3.0" "140444","2","","140423","2012-03-19 12:20:42","","20","","

My key is

Variety


Repetition. Seeing something once can be fleeting; seeing the 100th occurrence makes a difference!

Memory by fingers. I remember code much better when I've actually typed it a few times.

Code Library - Keep a personal stash of code and tricks you have used and seen.

Centralization. I keep 1 file with all my usernames (hundreds) on 1 pc. I apply security to it.

Discipline - you mentioned not having time / making the effort to update your own blog, etc. That's something you just have to work harder on and make sure you do.

Acceptance - skills, techniques and things you learned last week will be fresh. Some of the things you saw 3 years ago will be hard to remember. That's normal as the brain makes room for more.

Multiple senses - sometimes I use mnemonics, sometimes I drop a picture with key concepts drawn in distinctive ways. I read, I listen to podcasts, I watch videos, I use color in editors. The more senses I use, the better.

Mnemonics, e.g. css border order Tarball (TaRBalL) TopRightBottomLeft. I also use colors and shapes to remember words and themes. Often the more bizarre, the more memorable!

Continued Use- This is the 'use it or lose it' effect. All knowledge fades over time. Time++ Fade++

The Stack Exchange Network - I'm using Stack Overflow in multiple areas to try and keep as many different skills and techniques 'current' and 'remembered' even if I'm not using them in my current job/project.

Dropbox - I keep common small files with memory-related items

Books - I still like the look and feel of physical books. I also have multiple Kindles and other online technical books that I can refer to anywhere. Obviously my technical library is accessible anywhere when it is digital, which is huge.

The Google effect - no list of items would really be complete without mentioning this. This is more about what you don't need to remember - because you can google it and find it. This is an important consideration too. As more people become more adept at this way of getting knowledge, the need to actually memorize any given fact is falling. However, this is also 'raising the bar' for knowledge workers, who are finding more and more that a deep conceptual understanding is required to perform in the current environment. Of course, watch out for CMEs!

My own Blog

My own bookmarks site.

  • How do I keep my blog and my bookmarks updated? Well, at the end of the day I think it is discipline and niftiness, i.e. yes, there is a certain amount of dedication required for it. However, if you went to school for a degree and paid $100,000 (or even $10,000), or you are self-taught, you know the meaning of dedication and persistence. This is no different. The niftiness, or 'nifty factor', is that when you see a cool web site with a cool tutorial or technique or whatever, or you overcome a tough thorny problem, you go ""hey, that's nifty!"" - so when you feel this (or whatever catchphrase you use), associate it with ""I must blog that or record that bookmark"". There's a good chance you're not at a PC, updating your blog at that very moment, so send yourself an email, or a text, or even a voicemail, or a new task in your task list - whatever works for you - to remind yourself to do it! For instance, my Android phone has a tasks app that is useful for this.
","34069","","34069","","2012-04-17 02:33:39","2012-04-17 02:33:39","","","","3","","","","CC BY-SA 3.0" "49477","2","","49232","2011-02-18 20:27:58","","5","","

Excellent question. I fear that a good answer may prove very difficult.

But as a start, it’s quite easy to generate “true” randomness when two people are involved: simply let one of the people count silently in their head, modulo some number, and the other say “stop” after an arbitrary interval. Afterwards, this number can be transformed into other distributions using standard methods.

To make this method robust, the modulus mustn’t be too large, otherwise there will be a strong bias against small numbers. I’d really be interested to see if there exists any work analyzing the stochastic properties of this method.
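
A rough Python simulation of the idea might look like this (the stop-timing function is just a stand-in for the second person, and the digit-combining trick at the end is one standard way to stretch a small uniform draw into a finer-grained one):

    import random

    def human_stop_steps():
        # Stand-in for the second person's arbitrary "stop"; in real use this
        # would be however many counting steps elapse before they speak up.
        return random.randint(50, 500)

    def shared_random(modulus=10):
        # Person A counts 0, 1, ..., modulus-1 and wraps around until
        # person B says "stop"; the current count is the result.
        return human_stop_steps() % modulus

    def uniform_0_1(draws=4, modulus=10):
        # Combine several independent draws as base-`modulus` digits to
        # approximate a uniform value in [0, 1).
        return sum(shared_random(modulus) / modulus ** i
                   for i in range(1, draws + 1))

    print(shared_random())   # e.g. 7
    print(uniform_0_1())     # e.g. 0.7312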

","2366","","","","","2011-02-18 20:27:58","","","","2","","","","CC BY-SA 2.5" "141068","1","141274","","2012-03-22 21:50:18","","16","1119","

Edit: Justin Cave made a good point that this sort of communication should be up front in my quoting / estimations. In this case, I'm still interested to know what sort of language people use to describe the 'existing code learning' activities, especially to a company that hasn't dealt with software contractors before. End edit

I have a contract to upgrade some in-house software for a large company. The company has requested multiple feature additions and a few bug fixes. This is my first freelance style job.

First, I needed to become familiar with how the application worked - I learnt it as if I was a user.

Next, I had to learn how the software worked. I started with broad concepts, and then narrowed down into necessary detail before working on each bug fix and feature.

At least at the start of the project, it took me a lot longer to learn the existing code than it did to write the additional features.

How can I describe the process of learning the existing code on the invoice? (This part of the company usually does things in-house, so it doesn't have much experience dealing with software contractors like me, and I fear they may not understand the overhead of learning someone else's code.) I don't want to just tack the learning time onto the actual feature upgrade, because in some cases this would make a 'simple task' look like it took me way too long. I want to break the invoice into relevant steps, and communicate that I'm charging for the large overhead of learning someone else's code before being able to add my own to it.

Is there a standard way of describing this sort of activity when billing for a job?

","38286","","13597","","2012-03-27 21:19:28","2012-03-27 21:19:28","How should I describe the process of learning someone else's code? (In an invoicing situation.)","","7","2","2","2015-05-18 04:52:59","","CC BY-SA 3.0" "359533","2","","359530","2017-10-22 06:18:14","","7","","

Frankly, glue classes that merely pass through fields from the data model don't really add significant value to your application, nor are they particularly interesting from a functional perspective.

So before we get too deeply into your individual questions, let me propose a simple foundational architecture:

DB <---> ORM <---> SL/BLL <---> VM <---> V

The DB is your database. The ORM is your Object-Relational Mapper; it communicates with the database via SQL, and exposes CRUD methods. The SL/BLL is your Service Layer/Business Logic Layer; it converts business operations such as CreateInvoice into CRUD methods for consumption by the ORM. The VM is the ViewModel; it coordinates UI interaction with the SL/BLL. The V is the View; it contains surface-level UI interaction and validation code-behind.
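
As a rough, language-agnostic illustration of that flow (shown here as a minimal Python sketch rather than C#; all class and method names, such as OrmSession, InvoiceService and InvoiceViewModel, are hypothetical):

    class OrmSession:
        # ORM: talks to the database and exposes CRUD.
        def __init__(self):
            self._rows, self._next_id = {}, 1
        def insert(self, table, values):
            row_id, self._next_id = self._next_id, self._next_id + 1
            self._rows[(table, row_id)] = dict(values)
            return row_id
        def get(self, table, row_id):
            return self._rows.get((table, row_id))

    class InvoiceService:
        # SL/BLL: converts business operations into CRUD calls on the ORM.
        def __init__(self, orm):
            self._orm = orm
        def create_invoice(self, customer_id, amount):
            if amount <= 0:                      # business rule lives here
                raise ValueError("amount must be positive")
            return self._orm.insert("invoice",
                                    {"customer_id": customer_id, "amount": amount})

    class InvoiceViewModel:
        # VM: coordinates UI interaction with the service layer.
        def __init__(self, service):
            self._service, self.status = service, ""
        def submit(self, customer_id, amount):
            invoice_id = self._service.create_invoice(customer_id, amount)
            self.status = f"Created invoice #{invoice_id}"

    # The View (V) would bind to the ViewModel; here we just drive it directly.
    vm = InvoiceViewModel(InvoiceService(OrmSession()))
    vm.submit(customer_id=42, amount=99.95)
    print(vm.status)   # Created invoice #1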

Your sample code is suggestive of C#. In C#, your ORM (very likely Entity Framework) can produce code-generated entity classes that can be glued to domain logic using the partial keyword, allowing you to write custom domain and validation logic for each entity class without the pain of creating all that boilerplate for DTOs.

A lean architecture like this one should allow you to develop an easily maintainable application with the minimum necessary boilerplate code. Unless you're building an elaborate, crystal-cathedral architecture in Java for a large development team, this representative architecture should be all of the abstraction you will ever need.

","1204","","","","","2017-10-22 06:18:14","","","","5","","","","CC BY-SA 3.0" "359669","2","","359668","2017-10-24 17:42:43","","8","","

But my team is working using agile methods (a combination of scrum and kanban), so what we need is user stories.

This is a misconception. Neither Scrum nor Kanban require that requirements be specified in user stories. Both are silent on the issue.

The Scrum Guide refers to ""Product Backlog Item"". These items have only a few attributes - description, order, estimate, value. There's nothing about the format or style required by Scrum, although you do often find the User Story format used. Kanban has even fewer requirements on items than this.

Instead of trying to convert your requirements specification into User Stories, make sure that your requirements meet the characteristics of a good requirement - cohesive, complete, consistent, atomic, traceable, current, unambiguous, have an importance specified, and are verifiable. Then, identify any technical dependencies between requirements and ensure they are prioritized appropriately.

If you're embracing Agile, you'll recognize that your requirements specification is not ""finished"" - these requirements may change. Instead, focus on the principles of agile software development. Iterate quickly - implement slices of the functionality specified in the requirements specification and get it in front of people who can evaluate the software and provide feedback to incorporate into future iterations. Have your development team work closely with subject matter experts, product managers, and stakeholder representatives to understand the users and their needs. Focus on the higher priority requirements first - you may not actually need to implement everything to be acceptable by the users and can maximize work that isn't done. Reflect and improve on how the team works.

","4","","1204","","2017-10-24 18:35:12","2017-10-24 18:35:12","","","","14","","","","CC BY-SA 3.0" "359734","1","359738","","2017-10-25 16:06:20","","5","2661","

The insurance company for which I work has had an ongoing software development project for the last several years, which has been split between multiple lines.

  1. software package:
    The suite we are using is broken up into several major packages. They are independent but tied to each other, each with its own domain: one for billing, one for claims, one for policies, and one for contacts. Each package is managed separately and is broken down further by line of business.
  2. Line Of Business:
    We essentially have 3 lines of business, and we have been rolling out each line of business individually. The teams are broken down one step further; they are broken down by type of work.
  3. Type Of Work:
    We have split up the type of work based on integration between packages, configuration of an individual package, and document generation.

    These splits aren't universal; for example, on the first line of business, which we have already rolled out, we never had an integration team to my knowledge (this project was started prior to my involvement with the team).

    The issue that we have run into is that we have rolled out 2 of the 3 lines of business, and after we did this, we merged the scrum teams from those lines of business together for one of the software packages. This means that where we had 4 individual scrum teams, we now have 1. Obviously, this was a mistake; now our scrum master is overwhelmed, and scrum meetings are only relevant to a subset of those in attendance and take too long, leading to people not paying attention.

    So, now moving forward, there has been some push back to break this MEGA-TEAM back down into its previous component parts, at least partially owing to the fact that much of the granularization that was done prior to taking line of business 2 live is finished, and the line between configuration and integration is much messier. We have been discussing how to proceed, but there has been a deafening silence on the part of most people when it is brought up.

    To give one an idea of the scope of the problem, the aforementioned MEGA-TEAM is about 35 people, and if we split it back the way we had it, several key people would, by necessity, end up on multiple teams. Furthermore, if we split it back the way it was, our teams would still be bigger than an ideal team size, but the more we split them, the more personnel sharing would be necessary, which could eventually end up with all of our time being spent in meetings.

    How do we proceed? How do we decide where to draw the lines, and how do we address the need to have some people on multiple teams seemingly regardless of how we split up the teams?

","202858","","","","","2020-06-12 09:49:08","Scrum Team has grown too large, how should it be split","","2","0","1","","","CC BY-SA 3.0" "51280","2","","50831","2011-02-24 07:31:42","","22","","

I am a 21 year old undergraduate from India in my final years of Computer Science and Engineering 4 year degree course.

The very idea of writing this was to say that India is much more than an outsourcing hub. I hope the west sees it that way and instead of absorbing talent, the west should set up more hubs in India. There is some offensive content ahead, but if you understand the larger picture, you will understand what I am trying to say.

Education in India is in a very disturbing state, with a workforce produced every year that has no or absolutely horrendous technical skills. The educational system is not at all competitive in terms of innovation or entrepreneurship. This has led our country to huge embarrassments like the recent indigenously developed $10 computer (which turned out to be a cheap Chinese Android-based tablet, only maintained by an Indian company), or an earlier claim of another technological breakthrough (which turned out to be a thumb-drive). Educational institutes are totally disconnected from the real world of technology and are more interested in students reinventing the wheel, all in the name of innovation. Educational institutes - everyone hates them.

Coming to places where you at least expect to learn some hot development skills:

I have had exposure to a few training facilities in India apart from my educational institutes. Programming and software development happen at two levels, application level development and system level development.

For application development, most freshers in India are mass recruited by companies to fill a bench of programmers and to win more projects. At the end of the day, there is compromised quality because the hiring process is utterly stupid. Sometimes, talent is wasted by making people who are good at their stuff work on trivial things like creating Java frames and simple WinForms and ASP.NET UIs only (I am talking about fresher recruitment, and as claimed by some, though I am not sure). Setting aside good software engineering practices, that kind of coding can be done by a 7th grader.

But at the same time, there are independent programmers and developers who have a keen interest in things. They are like unsung heroes who have lost all hope and are least interested in changing the world. All they want is to make the most out of their skills, so it is all about the money and going abroad. While our courses are hugely limited to system software (C programming using TurboC!!! for 4 frigging years; stupid and vague C++ without proper object-oriented concepts - using cout in a C program is not C++; ASM; and more C programming using gcc), when in a company we are mostly made to do application development (ASP.NET, WinForms, J2EE). Basically, a Computer Science engineer is made to do the job of a Software engineer. Yes, knowing computer science helps, but not knowing proper software engineering hampers the process too much, and the whole system comes plummeting down. It is a #fail.

I will cite a simple example. I joined a training institute for my final year project and they wanted me to create an ASP.NET website which would be something of an inventory system (hotel booking, CRM, that kind of stuff). Yes, it is not an easy task, but in my opinion not a project worth working on. It would just be reinventing the wheel, and these projects are huge by nature in real life. Delivered in 6 months by a group of 3, you can understand the kind of scaled-down, unusable system that will result from this. The institutes do not go into much depth, and they are more interested in ""not scaring the student by telling them too much"" and ""giving an overview, and letting them learn the rest on their own"". At the end, what people develop in projects is not even a fully tested prototype, let alone something fit for real-life usage.

I took my own topic, a voice-guided real-time navigation system. I am using WPF, the Google Maps API and all the latest tech that I can. For good software engineering practice, I am using source control, using MVVM, and will give a thorough look at anything else that I come to know of. I am 21 years of age and am a graduate. I guess at my age, people in the West are still in the learning phase and become graduates at a later age. That makes Western graduates so much better and more knowledgeable. We have quantity but no quality.

In India, the level of work I am doing for my project is generally not expected of a final year undergraduate project. But, I will do it because I want to. At the same time, there are others in my group who are comfortable doing a project in ASP.NET, make 5-7 pages, run database queries, fill up grid-views and not give a damn about security. Hell, even those freelancing websites have better job postings (YouTube clone, Google instant + X = Y Mashup..)

Six months down the line, you will find the same people working in a company that you outsource your business to, and you will find me there too. People like them outnumber people like me ten to one :(

(To be exact, and not to rant: in my whole educational career and acquaintance with over ~500 people, I have seen exactly 4 who had the level of expertise that would make me consider working on a project with them.)

Ultimately, all Indian graduates will write good documentation because it is theory, but do not expect any fool-proof code from them.

Coming to system software, the same is the case. A friend of mine is working with the Android NDK on a live project at a company. He is fortunate to have got this project and I envy him, but this level of work happens in India too. Another senior at my college developed a Kinect clone (a multi-touch mouse, like in Minority Report) as his final year project using just 2 cheap webcams. Equally, there are others who copy code from the Internet and somehow get a degree reinventing the wheel.

My final word: do not expect compromised quality all over India, and do not take Indians for granted as cheap software maintainers suitable only for outsourced maintenance jobs.

Also, do not expect that someone who has a good educational background in terms of marks will write good software. India's education system is all theory-oriented; there is no stress on practical work. Sometimes, knowing more or the willingness to know more can land you in trouble with teachers who feel intimidated. Nevertheless, good programmers look for greener pastures in a better career and not just a good job; there are others who want to land a good ""job"", drive around in a Honda City, eat out at Mainland China and live happily ever after.

I am more into Audi btw. :)

","18208","","18208","","2011-03-04 15:58:32","2011-03-04 15:58:32","","","","1","","","2011-02-24 07:31:42","CC BY-SA 2.5" "142393","2","","142390","2012-03-31 18:00:43","","65","","

I don't know anything about Blub itself, but I've been in a similar situation where there was something about my job that I think should be fixed, but don't want to burn bridges. Here are a few ideas that may help.

  1. Try to fix the issue. Explain to your boss that you think Blub is a bad decision for the health and growth of the company. Provide specific cases and instances where it's hurting the company (or where some other platform would help the company better). Suggest an alternative that you feel is superior and be ready to back it up with facts (remember - objective data). This will allow you to voice your concerns and gauge how your boss responds and how open he is to different technologies (or, how married he is to Blub). You may also gain some insight into why the company is using Blub and sticking with it. It will also give you a gauge of whether it's worth sticking out through it, if the company has decided to change technologies. (Note - this may depend on your boss. Obviously, this won't work if he's in love with it and thinks it's the future of technology.)

  2. Hold out until you get a job offer. You've dealt with it until now, so find a new job and wait to leave until you get an offer. This gives you an easy out - ""I've been offered a position that better suits my career goals"" (or some other more neutral line). Granted, this doesn't necessarily help your current company, but it's also not entirely up to you to fix the matter.

  3. Say you want to take your career in a different direction. Explain that you would prefer to work on a different platform and that Blub isn't your cup of tea. This allows you to say something along the lines of ""I don't like it,"" without getting into the religious debate of code languages/platforms. As Paul said in his answer, it keeps the reasons for you leaving close to you and reduces the chance of people taking it personally.

  4. Make it clear that it's not the office environment. Make sure your boss and coworkers know that you enjoyed working with them. Offer to connect with them on LinkedIn if you haven't already. Try to keep in touch with them as part of your professional network.

As for your successor and documentation, simply make sure all the issues/quirks that you know of are documented somewhere, either in the code or in a wiki or some other structured documentation platform. Explain in comments why you did something a certain way and be matter-of-fact about it - ""doing it this way because our version of Blub doesn't support Alternative Method X."" If your successor is familiar with Blub and doesn't mind it, then they're not going to heed any kind of ""stay away!"" messages. Someone not familiar with it is probably going to think you're just one of those platform elitists and ignore overt messages, and someone who is familiar with Blub and doesn't like it, or is on the fence, will either already sway to your side after more experience, wouldn't have applied to the position, or would ignore your ""stay away!"" messages, anyway.

","19699","","147","","2013-09-05 14:11:40","2013-09-05 14:11:40","","","","6","","","","CC BY-SA 3.0" "360460","2","","360454","2017-11-08 17:05:09","","1","","

It has to do with how you feel about delete.

In aggregation, both entities can survive individually, which means deleting one entity will not affect the other entity.

When there is a composition between two entities, the composed object cannot exist without the other entity.

geeksforgeeks.org: Association Composition Aggregation

So if the legal team stormed the IT department with a court order and forced you to delete a customer record, do you want to automatically delete the customer's invoices as well, even if that's not part of the court order?

There are many reasons to delete. It's the meaning of the delete that dictates how it should cascade through the system. How will the system behave if invoices point to customers that don't exist? Is that how you want it to behave?
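
A small, hypothetical Python sketch of the two delete behaviours (the Customer/Invoice data and function names are purely illustrative):

    # Two in-memory "tables"
    customers = {1: "Acme", 2: "Globex"}
    invoices = {100: {"customer_id": 1, "total": 250},
                101: {"customer_id": 2, "total": 80}}

    def delete_customer_aggregation(customer_id):
        # Aggregation: the customer goes, the invoices survive on their own
        # (and may now reference a customer that no longer exists).
        customers.pop(customer_id, None)

    def delete_customer_composition(customer_id):
        # Composition: invoices only exist as part of their customer,
        # so deleting the customer cascades to them.
        customers.pop(customer_id, None)
        for inv_id in [i for i, inv in invoices.items()
                       if inv["customer_id"] == customer_id]:
            del invoices[inv_id]

    delete_customer_composition(1)
    print(invoices)   # {101: ...} -- customer 1's invoices went with it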

","131624","","-1","","2020-06-16 10:01:49","2017-11-12 23:29:45","","","","5","","","","CC BY-SA 3.0" "360635","2","","360583","2017-11-11 17:07:43","","2","","

The architect needs to stop designing the system for everybody else

For now, the architect is the only person who really understands what's going on; by allowing the architect to continue to provide designs to everybody else, this problem can only get worse. Documentation isn't going to change this - the architect will always be a bottleneck as long as they are the only one involved in the design process.

Suggest to the architect that s/he should step back into an advisory role whereby they are no longer responsible for taking these decisions, but instead spend their time mentoring the team, answering questions, reviewing requirements, providing feedback on design proposals and making time to talk other members of the team through the codebase.

Focus on the problem of knowledge sharing, not on documentation

The team may decide that some documentation is useful for their future reference, however this should be motivated by their needs in understanding the system. Keep an open mind about the format of any technical documentation - the team may find that it's more beneficial to maintain some kind of Wiki or notes repository where anybody can add useful snippets of information, rather than trying to work around formal design documents.

Avoid taking a simplistic approach of assigning documentation tasks; you're more likely to end up with a large quantity of rather useless documents that nobody will ever read, until somebody digs it up one day and finds that it's heavily outdated, filled with obsolete information, and ends up being deleted.

Instead, put an emphasis on collaboration across the whole team (including the architect) for any matters involving design and architecture - no single person should be responsible for unilaterally taking such decisions, as this is how ""silos"" of knowledge are created in the first place. Documentation can be a by-product of this approach, wherever it feels appropriate; for example, the team may spend time in a meeting room with a Whiteboard, and may decide that the information on the whiteboard deserves being transcribed into a document (or maybe just a photo of the whiteboard will be enough.)

Open up the system design process to the whole team

Ideally, everybody in the team should be able to have the opportunity to be involved in any discussions or decisions about design/architecture. Not everybody needs to be an expert at everything, but it works best when everybody stays in-the-loop and is encouraged to get involved or provide feedback when new requirements emerge.

When new work is assigned, choose another member of the team to propose a design, and ask other members of the team to be involved in the review and feedback process. The more people in the team who are actively responsible for the continued growth/evolution of the system, the easier it will be for everybody to work on the project.

Reset the architect's role in the team

While the architect will always have the loudest voice, their position should be about enabling the team to make the right decisions, rather than making a decision for the team. For example, the architect may step in when the team can't agree on something. The architect also needs to guard against flawed designs which might risk the stability of the system (the architect is still ultimately responsible for the integrity of the design, so they always have the power to say ""no"", although it's important that the team understand the reason why something is bad/wrong).

By opening up the design process to the whole team, there are more opportunities for knowledge and ideas about the design and architecture of the system to spread and 'cross-pollinate' to the rest of the team. Over time there should be fewer issues; spreading knowledge means the team and the architect should start to align, and are more likely to converge on these decisions.

This will naturally lead to everybody in the team asking a lot more questions of the architect at first; the architect will always remain a key player in the whole process, but in time the number of questions will decrease and they will cease to be a bottleneck.

Some documentation is nearly always valuable

Documents filled with class diagrams, flow diagrams and other such banalities are unlikely to do anything other than burn a whole load of time for something that nobody will ever read, but projects often rely heavily on other kinds of documentation which sit at a higher level than the codebase.

For example:

  • Requirements and user/stakeholder expectations - Everybody involved in a project needs to be in mutual agreement about this; it's hugely important for requirements to be captured and agreed somewhere, otherwise you can easily end up with misunderstandings between the team and the end user or stakeholders.
  • Acceptance Criteria - developers need to fully understand the bar against which their solutions will be measured. Acceptance criteria should be agreed with stakeholders, so it needs to be documented so that developers have something unambiguous to refer to when testing their solution
  • System architecture - it's often useful to have a high-level view which shows relationships between the top-level system components such as databases, web services, 3rd-party APIs, hardware modules, etc. It's also useful to describe the interfaces between those components, as well as describing the function of the main system modules.
  • Functional Specification - This should describe the main features of the system in enough detail that somebody unfamiliar with the system can understand what those features are for and how to use them. Also consider information such as how to deploy and configure the system, how to start troubleshooting user problems, where to find diagnostic logs, how to back up the system, etc.
","51489","","","","","2017-11-11 17:07:43","","","","1","","","","CC BY-SA 3.0" "143249","2","","12401","2012-04-06 01:53:49","","1","","

IMO, robustness is one side of a design trade-off, not a ""prefer"" principle. As many have pointed out, nothing stinks like blowing four hours trying to figure out where your JS went wrong, only to discover the real problem was that only one browser did the proper thing with XHTML Strict: it let the page go to pieces when some portion of the served HTML was a complete disaster.

On the other hand, who wants to look up documentation for a method that takes 20 arguments and insists they be in the exact same order with empty or null value place holders for the ones you want to skip? The equally awful robust way to deal with that method would be to check every arg and try to guess which one was for what based on relative positions and types and then fail silently or try to ""make do"" with meaningless args.

Or you can bake flexibility into the process by passing an object literal/dictionary/key-value pair list and handling the existence of each arg as you get to it. For the very minor perf tradeoff, that's a have-your-cake-and-eat-it-too scenario.
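
In Python terms, the contrast looks something like this (a minimal sketch; the option names and the draw_border function are made up for illustration):

    # Instead of draw_border(1, None, None, True, ...) with 20 positional
    # slots, accept named options and only handle the ones that are present.
    DEFAULTS = {"color": "black", "width": 1, "dashed": False}

    def draw_border(**options):
        unknown = set(options) - set(DEFAULTS)
        if unknown:
            # Fail loudly on typos rather than silently guessing.
            raise TypeError(f"unknown options: {sorted(unknown)}")
        settings = {**DEFAULTS, **options}
        style = f"border: {settings['width']}px {settings['color']}"
        return style + (" dashed" if settings["dashed"] else "")

    print(draw_border(width=2))                   # pass only what you care about
    print(draw_border(color="red", dashed=True))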

Overloading args in intelligent and interface-consistent ways is a smart way to be robust about things. So is baking redundancy into a system where it's assumed packet delivery will regularly fail to be delivered in a massively complicated network owned and run by everybody in an emerging field of technology with a wide variety of potential means for transmission.

Tolerating abject failure, however, especially within a system you control, is never a good tradeoff. For instance, I had to take a breather to avoid throwing a hissy fit in another question about putting JS at the top or bottom of the page. Several people insisted that it was better to put JS at the top because then, if the page failed to load completely, you would still potentially have some functionality. Half-working pages are worse than complete busts. At best, they result in more visitors to your site rightly assuming you're incompetent before you find out about it than if the busted-up page were simply bounced to an error page upon failing its own validation check, followed by an automated e-mail to somebody who can do something about it. Would you feel comfortable handing your credit card info over to a site that was half-busted all the time?

Attempting to deliver 2010 functionality on a 1999 browser when you could just deliver a lower-tech page is another example of a foolhardy design tradeoff. The opportunities blown and the money I've seen wasted on developer time spent on bug-ridden workarounds, just to get rounded corners on an element hovering above a !@#$ing gradient background, have completely blown me away. And for what? To deliver higher-tech pages that perform poorly for proven technophobes while limiting your choices on higher-end browsers.

In order for it to be the right choice, the choice to handle input in a robust manner should always make life easier on both sides of the problem, in the short and the long term IMO.

","27161","","","","","2012-04-06 01:53:49","","","","1","","","","CC BY-SA 3.0" "361212","1","361217","","2017-11-24 13:52:32","","2","542","

Summary

Our development/support team creates applications for company employees (invoicing, task management and the likes).

We have a recurring issue where users misuse the applications they're provided, using workarounds, stepping outside of business process boundaries, and leaving data or automated processes in poor states that are often hard to recover from. It's quite obvious they compensate for features perceived as missing, or for bad UX. Sometimes, though, they just make mistakes following procedures.

Business managers are aware of this and lament the fact, but nothing changes.

We are aware that our apps are not state-of-the-art. There are bugs and less-than-ideal UIs. Lack of resources also means it's not evolving rapidly.

The behavior is generating extra support load, making us less likely to deal with root issues. It also often generates requests for workaround features, rather than requests to fix the root issue.

What can be done, from the IT side, to prevent or minimize destructive user workarounds?

(I'm not sure much can be done at the developer level, so any management-level actions are welcome)


Examples

  • Because the user account for a new hire was not created in time, the user logged information in another account ""in the meantime"". Now user wants IT to transfer said information to his proper account (a purely in-DB manual task)
    • Enabled by: logging app allows any user to log information in any user's account (very low security). User account creation is late (reason unknown, done by another team)
  • Users make use of features in one application that mess up our automated process because our app cannot deal with the extra or missing data, or simply never receives calls for the unimplemented features. Users have been told by management not to use these features. IT has been told not to bother blocking the extra features. Users still use the extra features. IT has to fix things manually all the time.
  • Users misuse predefined fields because the field they need is missing (putting notes about a customer in its 3rd address line field for example). Users do not ask us to add the field. Management does not seem to prevent the behavior. IT gets in trouble when said notes suddenly appear on invoices because a new feature displays that 3rd address line, as expected.
  • A user does not want to wait for us to develop his reports, and thus asks to access the DB and makes his own solution. This has led to further demands about specific fields and tables (instead of expressing business needs) and requests to execute pre-made queries (with many mistakes and no explanation as to what they're meant to do business-wise).

All of these seem to follow the same template: users won't/can't wait, they don't express their needs, they just come up with workarounds. I understand the temptation, even the need, as a user myself. But it has consequences: bad data that gets in the way of strict DB constraints, and in nearly every case it adds manual-fixing work for the team.

What bothers me is that it feels like being punished for something you did not do. It's like they cross the river instead of walking around to the bridge, then complain to us that they're all wet and cold and request that we set up a raft service in that spot. Then they complain because a bump made them drop their bag, and ask us to dive in to get it back.

Isn't caving in with every request enabling the behavior further?

Solutions?

While I've come up with ideas to avoid the behavior, or the consequences, I'm unsure which might be more efficient, and which might get us (IT) in trouble.

  • Saying no to such requests. Users would have to formally request their initial requirement and wait for it to be ready. Workaround mess ups would be cleaned by users.

Anytime I suggest this, colleagues look at me with this look: ""that would be awesome, in an ideal world, but..."" It feels as if the business side has guns pointed at everyone's families. It's also not always possible to let users fix their messes. Lack of admin tools means we're often the only ones capable of cleaning the mess.

  • Locking down the forbidden features that are incompatible with our workflows, and encouraging formal requests for any improvement deemed necessary.

This has been an issue so far because features are hard or impossible to lock, or because the people who could lock them are ""busy"" (other teams not affected by the fallout).

  • Train the users (not controlled by IT)

Supposedly done multiple times, but has had NO effect whatsoever.

  • Punish the users when they step out of established processes (not controlled by IT)

Many memos sent as a reminder. No change. No harsher punishment than reminders has been dealt out.

  • Explain why it's bad, how much work it creates for IT

Repeated ad nauseam to business managers, who tried the two previous actions, to no avail. I'm not sure the actual users know the impact it has.


Bad developer, no cookie

In reply to many, and to hopefully save face a little: this question was both a sanity-check and a way to learn if anything can be done when the sensible thing (actually implementing the features users need) is removed from the table by management.

My team and I are working with legacy applications with less than ideal structures, which makes them hard to fix and upgrade. New parts are thankfully done in a more sensible manner and rarely give us trouble, except when we're told not to focus on things like user experience or validating requirements.

We'd love to fix root issues, remove bugs, implement the rails and safeguards to guide users down the expected process paths, and so on... but there's never time for it because in spite of suggesting this many times, we're put on firefighting duty and creating new features or upgrading existing ones.

Some fixing happens when we edit existing code, but larger overhauls are needed to fix the bigger problems, and this requires time we're not given. It also often depends on applications from other teams, and they're ""busy"" too.

I've recently given a reminder about the users using the extra features from the external app that our app can't deal with. Either we adapt our app, or the team in charge of the external app locks down the features. It's been acknowledged, as it often is... but now to see when/IF it ever gets a greenlight. In the meantime, we douse the fires.

This is the cradle in which the question was born: what does one do when you're not allowed to treat the disease but only the symptoms? I apologize that it was not clear but part of me wanted to see genuine reactions to the base problem, to reassure myself that I'm not the only one to think we're dealing with user needs very poorly.

I don't even condone most of the solutions put forth, I've just seen these applied so far and wondered if any really made sense. Clearly, no.

","79557","","79557","","2017-11-25 08:32:40","2017-12-21 07:36:51","How do you minimize destructive user workarounds?","","3","3","","2017-11-28 17:38:54","","CC BY-SA 3.0" "250703","2","","250699","2014-07-22 11:35:52","","14","","

The C++ standards committee is full of smart people that are fully aware of the amount of existing code and the consequences of introducing new keywords.

One of the aims of the committee is to keep as much existing code as possible working unchanged and that certainly plays a large role when deciding to add new keywords and how to name those keywords.

To my knowledge, when a new keyword is proposed, they perform an investigation over a number of large codebases to see how many conflicts that new keyword would create and if any of those conflicts would create a silent change in the behavior of the programs.

According to the proposal for adding nullptr, this spelling of the keyword resulted in the least amount of conflict with existing code:

  • Programmers have often requested that the null pointer constant have a name, and nullptr appears to be the least likely of the alternative text spellings to conflict with identifiers in existing user programs. For example, a Google search for nullptr cpp returns a total of merely 150 hits, only one of which appears to use nullptr in a C++ program.

    • The alternative name NULL is not available. NULL is already the name of an implementation-defined macro in the C and C++ standards. If we defined NULL to be a keyword, it would still be replaced by macros lurking in older code. Also, there might be code “out there” that (unwisely) depended on NULL being 0. Finally, identifiers in all caps are conventionally assumed to be macros, testable by #ifdef, etc.
    • The alternative name null is impractical. It is nearly as bad as NULL in that null is also a commonly used in existing programs as an identifier name and (worse) as a macro name. For example, a Google search for null cpp returns about 180,000 hits, of which an estimated 3% or over 5,000 use null in C++ code as an identifier or as a macro. Another favorite, nil, is worse still.
    • Any other name we have thought of is longer or clashes more often.

Given this analysis, I don't expect that that much adjustment needs to be done.
But otherwise, yes, you are expected to update your codebase for the backwards-incompatible changes when moving to a new standard.

","5099","","5099","","2014-07-22 13:52:26","2014-07-22 13:52:26","","","","2","","","","CC BY-SA 3.0" "53350","2","","53287","2011-03-01 15:00:27","","3","","

I'd say in just about every situation - try to find the common ground. Ideally at least, the technical lead wants quality code to be created in a timely manner. Anything that makes the code better, or the process faster is a win. Sometimes it just has to get boiled up to that level.

Code reviews pose an extra challenge - they can be expensive in both the time for the attendees to prep and attend, and in the interruption to flow (the point where you're really humming in development tasks).

In this particular scenario - I would avoid telling the manager that the formatting work is ""trivial"". As others have pointed out, it's not trivial - consistent, easy to read code helps everyone out in the long run. BUT - most formatting work is not debatable. It's usually that someone found a violation of a pretty clear coding guideline. As you say, the meeting could be better spent on items where consensus is needed and where discussion is required.

I'd suggest the following:

  • Do your absolute best to go through the coding guidelines and submit code that is already formatted well. Unless your guidelines are nebulous, you should be able to submit well-formatted code without a review.
  • Ask reviewers to markup the code for format BEFORE the meeting and hand you their markups in the meeting.
  • Don't invite discussion on the little stuff, accept and move on - just say you'll review the markups and make the updates and then start asking questions about the hard stuff.

There's a tricky point of trust here - you have to make sure the changes get in. If it's too trivial to warrant discussion, then should be so easy to change that you can update the code in an hour or two. If people come to believe that their markups aren't getting updated in the code, then they will feel the need to voice it in the meeting and there you are... back on formatting again.

","12061","","","","","2011-03-01 15:00:27","","","","0","","","","CC BY-SA 2.5" "250869","2","","250707","2014-07-23 13:52:37","","10","","

This is not intended to be a complete answer—there are already several very good ones mentioning important things like how to use your VCS and your project management software—but rather an addendum adding a few points I did not see in any others, which I find to be very helpful, and which I hope other people might find helpful as well.

1. No task is too soon or too small to write down

People usually make TODO lists for things that they plan to do in the future, but since programming requires concentration, and since we can be interrupted at any time, I've found it helpful to write down even what I'm doing right now, or what I'm about to start in a matter of seconds. You may feel you're in the zone and you couldn't possibly forget the solution that just hit you in that aha moment, but when your co-worker drops by your cube to show you a picture of his infected toe, and you are only able to finally get rid of him by starting to gnaw on your own arm, you may wish you had written down a quick note, even if only on a Post-It™ note.

Of course some other more persistent medium might be better (I'm particularly fond of OmniFocus), but the point is to at least have it somewhere, even if you'll finish in 20 minutes and then throw the Post-It™ away. Although you may discover that that information becomes useful, to put on time sheets or invoices to the client, or when your boss/client asks you what you've been working on and you can't remember. If you drop all of these notes in a box or drawer or folder, then when a big interruption hits—an interrupting project—then you can glance through them and remember a lot of the things you did to get your code to the point where you find it when you return to the project.

2. Use a whiteboard at your desk to capture big-picture ideas

I have a 3' x 4' whiteboard next to my desk, so when I start a project I can brainstorm the solutions to all the problems I perceive in a project. It could be architectural diagrams, use cases, lists of risks and obstacles, or anything that seems relevant to you.

Some more formalized approaches require you to generate diagrams and use cases and so forth as ""deliverables"" in some paper or electronic format, but I find that that can create a lot of extra work, and just become a series of sub-projects that end up being divorced from the actual purpose of the main project, and just part of a formalized process that you have to do but that no one pays much attention to. A whiteboard is the simplest thing that actually works, at least in my experience. It is as persistent as you want (with a camera) and most importantly allows you to get your ideas down immediately.

I think better with a pen in my hand, so dumping my thoughts onto a white surface comes naturally to me, but if you don't find that to be the case for you, here are some questions that may help you decide what is relevant:

  • If I were the lead developer, about to go on a honeymoon for 3 months while other developers completed the project, what general direction would I want to give them? What ideas would I want to make sure they knew about, or approaches would I want to ensure they took? What libraries or other helpful solutions would I want to be sure they were aware of?
  • If this project were my million-dollar idea that I knew would ensure my future financial independence, but I was scheduled for a critical surgery that would incapacitate me for 3 months, what would I want my future self to have, to ensure successful completion of the project?

(When I first scribble ideas down, I only worry about them making sense to my present self. Once they are down I can look more critically at them and make changes to ensure they make sense to my future self or to others. Worrying too much about communicating to others as you write them down initially can lead to writers' block—a mind clogged by competing goals. Get it down first; worry about clarity later.)

I recommend you spend the money to buy a decent whiteboard, at least 3' x 4', and hang it up in the space where you normally work. There are many advantages of a physical whiteboard over any virtual system.

  • It is large. By taking up a lot of space it makes its presence felt, and the plans on it feel like they are a part of your workspace, helping to point you in the right direction all the time.
  • It is there persistently: you don't have to launch a certain app or web site to access it, and you won't risk forgetting how to get to it, or forgetting that it's there.
  • It is immediately accessible when you have an idea that you want to think through.

You lose many of the benefits if you just use a whiteboard in a meeting room, and then take a snapshot with your phone. If you make money by programming, it's well worth the cost of a decent whiteboard.

If you have another project interrupt the one that has filled up your whiteboard, you may need to resort to the snapshot on your phone, but at least you'll have that in 3 months when the ""urgent"" project is finished and you have to return to the other one. If you want to recreate it on your whiteboard then, it would probably only take 15 minutes, and you may find you can improve it a lot in the process, which makes that small investment of time very worthwhile.

3. Make stakeholders aware of the cost of interrupting a project

I find the metaphor of a plane helpful: starting and completing a project is like flying a plane. If you bail out mid-way through the flight, the plane will not just sit there in the air waiting for you to come back to it, and you need some way to travel from the current project/flight to the next one. In fact if you're in the middle of a flight from Phoenix to Fargo and you're told that you need to interrupt that flight to take another plane from Denver to Detroit, you'll need to land the first plane in Denver (which is fortunately not far from your flight path—not always the case with real interruptions) and someone has to figure out what to do with the cargo and passengers. They won't just sit and wait forever.

The point of this for projects is that transitioning from one project to another incurs a large expense of time and leaves a lot of loose ends that have to be dealt with.

In a project there is obviously and inevitably a lot that goes on in your head while you work and not every thought can be serialized to a written medium, and not every iota of those thoughts that are serialized will remain when deserialized. Although we can partially capture our thoughts in writing, it is very much a lossy format.

The problem (as I see it) is that project managers and other business people think of projects as a series of steps that can often be reordered at will (unless there is an explicit dependency on their Gantt chart) and can be easily distributed amongst people or delayed until it is most convenient for the business.

Anyone who has done any amount of programming knows that software projects cannot be treated like Lego blocks to be moved around any way you like. I find the metaphor of air travel at least gives stakeholders something concrete that they can think about that clearly cannot be treated as a series of disparate steps to be reordered on a whim. It at least makes it easy to understand your point that there is a cost to such interruptions. Of course it is still their decision, but you want to make them aware of this before they interrupt one project to give you another. Don't be combative; offer helpful information and the helpful perspective of the developer, ready to do whatever they need from you, while giving them information that they might not be aware of if you don't tell them.


In short:

  1. Write down everything you're about to do, even if you don't think you could ever possibly need it written down. Even a short pencil beats a long memory.
  2. Brainstorm the big picture on a physical whiteboard that you have persistent access to.
  3. You might avoid project interruptions if you make decision makers aware that there is a cost to such interruptions, and at least you will have set expectations so they know the project will take a bit longer when you resume it.
","31367","","31367","","2014-08-19 15:04:19","2014-08-19 15:04:19","","","","2","","","","CC BY-SA 3.0" "251122","2","","251117","2014-07-25 10:27:59","","19","","

Writing computer code is a prime example of making decisions under uncertainty. There are always conflicting pressures, and how you decide in any given question depends on what pressures you perceive and how big you consider them.

Therefore, when a reviewer disagrees with your decision, that means they see different pressures/risks than you do, or they consider some of them larger/smaller than you do. You must absolutely talk about these differences, because not doing so perpetuates the problems that led to disagreement in the first place.

If your reviewer is more senior, their experience may correctly tell them that this or that risk is not very relevant in practice, or they may know that some kind of error has a long history of occurring in your organisation, and it's worth being extra careful to avoid it. Conversely, if you feel that you know something your reviewer probably doesn't, you must absolutely express that; keeping silent amounts to a dereliction of duty on your part.

The most important part of handling the situation is to understand that criticism of design decisions is virtually always not a criticism of someone's personality. (In the rare cases where it actually is, you'll notice that soon enough, and if you truly cannot please somebody, nothing you do makes any difference, so where's the harm in following best practices? Far better to find a better position as soon as possible.) It is just a result of different people having different perceptions of the many risks and rewards involved in computer code, and given how complex modern computer code is, that is only to be expected. Talking about the differences helps improve the code and your cooperation, in this case and in future cases.

","7422","","","","","2014-07-25 10:27:59","","","","0","","","","CC BY-SA 3.0" "251248","1","401672","","2014-07-26 10:02:15","","1","544","

I am creating a solution where I essentially put all rules regarding communication with customers (including automatic invoicing, reminder emails, welcome emails, etc.) into a Google Sheet and use Ultradox to create emails and PDFs based upon Google Docs templates. For the three automatic emails I have currently implemented, this is working out really well; the whole thing is very transparent to our organization since even non-technical people can inspect and correct the ""Excel""-formulas.

My concern is that in 2-3 years we will probably have 200 unique emails and actions that we need to send out for the various occasions and given the various states that customers can be in. Of course I could aim at limiting the number of emails and states that our customers can be in, but this should be a choice based upon business realities and not be limited by the choice of technology.

My question is therefore: what are the limits of complexity (when will it become unmaintainable) that can reasonably be implemented in a solution based upon Google Apps Script and Google Sheets, given that I will attempt to expose as many of the rules as possible to Google Sheets? And what pitfalls should I be aware of when basing myself on spreadsheet formulas, and what strategies should I follow to avoid the pitfalls?

Some of my own strategies: so far I have come up with the following to increase maintainability:

  1. Using several Google Sheets, each with its own purpose, each with its own dedicated ""export"" and ""import"" sheets so it is clear which columns are dependent on the Google Sheet. Such sheets also help maintain referential integrity when inserting columns and rows.
  2. Using multi-line formulas with indentation for formula-readability
  3. Experimenting with the ""validation"" function to reduce the variability of data
  4. Experimenting with Arrayformulas to ensure that formulas will work even if additional rows are added
  5. Potentially offloading very complex formulas to Google Scripts and calling them from spreadsheet formulas
  6. Using Named Ranges to ensure referential integrity

Please notice that I am not asking about performance in this question, only maintainability.

Also, I am unsure of how software complexity can be measured, so I am unsure of how to ask this question in a more specific way.

","5094","","5094","","2014-07-28 08:58:53","2020-11-08 10:07:09","Complexity limits of solutions created in Google Spreadsheets","","1","0","1","","","CC BY-SA 3.0" "144557","2","","144556","2012-04-15 16:11:41","","3","","

It's probably better for a team that writes a lot of code. It allows you to worry about writing code, leaving the CI server to automatically run release builds, check dependencies and run some static analysis - if you're doing your job right, it'll be silently compiling your checkins. If you're getting things wrong, it'll email you what errors it's finding.

If you get a lot of emails from it, then it's obvious you're not working together very well; the emails will tell you what areas you need to look into, whether that's because you're writing poor code, or checking in code that breaks the others' code.

","22685","","","","","2012-04-15 16:11:41","","","","0","","","","CC BY-SA 3.0" "362191","2","","362179","2017-12-11 20:37:30","","1","","

Neither of the options you have presented is good OO. If you are writing if statements around the type of an object, you are most likely doing OO wrong (there are exceptions; this isn't one). Here's the simple OO answer to your question (may not be valid C#):

interface IAccount {
  bool CanResetPassword();

  void ResetPassword();

  // Other Account operations as needed
}

public class Resetable : IAccount {
  public bool CanResetPassword() {
    return true;
  }

  public void ResetPassword() {
    /* RESET PASSWORD */
  }
}

public class NotResetable : IAccount {
  public bool CanResetPassword() {
    return false;
  }

  public void ResetPassword() {
    Print(""Not allowed to reset password with this account type!"");
  }
}

I've modified this example to match what the original code was doing. Based on some of the comments, it seems people are getting hung up on whether this is the 'right' specific code here. That is not the point of this example. The whole point of polymorphism is essentially to conditionally execute different implementations of logic based on the type of the object. What you are doing in both examples is hand-jamming what your language gives you as a feature. In a nutshell, you could get rid of the sub-types and put the ability to reset as a boolean property of the Account type (ignoring other features of the sub-types).

Without a wider view of the design, it's impossible to tell whether this is a good solution for your particular system. It's simple and if it works for what you are doing, you will likely never need to think much about it again unless someone fails to check CanResetPassword() prior to calling ResetPassword(). You could also return a boolean or fail silently (not recommended). It really depends on the specifics of the design.

","209331","","209331","","2017-12-12 17:12:15","2017-12-12 17:12:15","","","","25","","","","CC BY-SA 3.0" "145381","2","","145268","2012-04-20 17:09:53","","4","","

I would go from high level to low.

Demo the app as soon as possible

One of the most important things is that the developer has an idea what they will work on. During the demo, point out some of the things that have been under recent development, and the direction the app is going.

Explain the high level architecture

This is also very important. Allow the new dev to listen and ask questions. Do this as a group exercise with the other devs, who will hopefully chime in and help you out. This will let the new developer know that it is OK to speak up openly and honestly.

Have a great on-boarding document ready

Having a great on-boarding document does not only help new devs, but old ones as well. It can contain expectations, useful links and environment setup information. (I cannot tell you how many times I used our on-boarding document to set up my environment when I got a new computer...) It should be well structured and to the point, and not be a dumping ground for every little detail.

Encourage him/her to ask questions (and be available to answer them)

With the answers, guide them, but do not tell them what to do. Give them hints but allow them to finally figure it out themselves.

Help the other team members welcome the newcomer

There are two sides to the coin when someone joins a team. The team needs to have the tools to welcome the new developer as well.

Let them pick up a small task or two

Allow them to add something new and visible to the project that is demo-able. When it is demoed, call out who did it and what a good job they did. This can really boost self-esteem. The faster they feel like they are adding value, the faster they feel they are part of the team. The faster they will feel empowered to do the best they can.

Encourage them to take on harder tasks once they feel more and more comfortable

Good candidates will do this naturally.

","16992","","16992","","2012-04-20 17:15:24","2012-04-20 17:15:24","","","","0","","","","CC BY-SA 3.0" "252287","2","","252253","2014-08-04 20:33:23","","2","","

If you consider that the main reason people buy Office is to keep compatibility with all the existing documents, many of which have macros and VBA in them, it would be a very brave Microsoft that treated those users like they did the VB6 crowd and told them to suck it up and start coding in .NET. Just take a look at the #1 UserVoice request ever!

I imagine the LibreOffice guys would cheer themselves into unconsciousness though!

VBA is for productivity in Office, not ""programming"". The day you need more power from your documents is the day you hire a programmer to rewrite everything. I guess a similar reason explains why Visual Studio's macros are not .NET either - think of the devenv4 COM object as not much different to VBA.

","22685","","","","","2014-08-04 20:33:23","","","","1","","","","CC BY-SA 3.0" "362611","2","","362610","2017-12-18 07:19:48","","16","","

Often, introducing a variable just to name some result is very helpful when it makes the code more self documenting. In this case that's not a factor because the variable name is very similar to the method name.
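
For example, here is a small hypothetical sketch (not the code from the question) of where a named intermediate result earns its keep:

// Hypothetical Java sketch; assumes java.time.LocalDate is available.
boolean accountIsDormant = lastLogin.isBefore(LocalDate.now().minusDays(90));
if (accountIsDormant) {
    sendReactivationEmail();   // the variable name documents the business rule
}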

Note that one-line methods don't have any inherent value. If a change introduces more lines but makes the code clearer, that's a good change.

But in general, these decisions are highly dependent on your personal preferences. E.g. I find both solutions confusing because the conditional operator is being used unnecessarily. I'd have preferred an if-statement. But in your team you may have agreed on different conventions. Then do whatever your conventions suggest. If the conventions are silent on a case like this, notice that this is an extremely minor change that doesn't matter in the long run. If this pattern occurs repeatedly, maybe initiate a discussion how you as a team want to handle these cases. But that is splitting hairs between “good code” and “perhaps a tiny bit better code”.

","60357","","","","","2017-12-18 07:19:48","","","","7","","","","CC BY-SA 3.0" "252388","2","","252386","2014-08-05 18:51:28","","8","","

I believe it is somewhat bound up with development of technology and applications.

The case

Text sent over serial communications goes first character first, second character second, and so on. IMHO, it makes no sense to do otherwise because we can start reading as soon as the message starts, and we don't need any extra layout information. For western languages, that is top left to bottom right. This started with the telegraph. The West developed much of the technology, so it makes sense that we made it easy for ourselves.

Text stored in computer memory is easiest to receive low address to high address, and easiest to manipulate when it is in the same order as we read it, first character at the lowest address, last character at the highest address.

Early text terminals did not store pixels, but instead stored character codes, and converted character codes to pixels on the fly (in hardware). So storing the received character codes in screen memory was the simplest option. Early personal computers did the same, and for some personal computers, the screen memory was just ordinary memory. So it uses less resources, or is easier to make, and easier to program for screen memory and in-memory text to store characters in the same order.

Screen memory must be serialised to an image representation in the order the screen needs it, i.e. for the electron gun to paint it. Technically, the electron-gun scan direction may be arbitrary, and could start in any corner, but IMHO culturally it makes sense for western language readers. Hence existing display devices (CRTs painting top-left to bottom-right) worked perfectly as computer displays.

It makes sense for western languages to have the text 0, 0 origin for a text display start at the top left.

IMHO, it makes a lot of sense to match a graphics device 0,0 origin to the textual display origin (though, IIRC, some devices did not!).

More Detail

The in-memory screen image is a chunk of memory with linearly increasing addresses. So the electronics to retrieve memory for display would be more convenient if the screen-refresh counter started at '0' (i.e. the start of image memory), and just incremented in sync with drawing the screen. Simple DMA systems do this. It would then make sense for the graphics coordinate system to map to that memory address simply. Minimum complexity would argue for aligning screen-refresh addressing with graphics coordinate addressing, and they would share an origin.
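
As a purely illustrative sketch (made-up names, not tied to any particular hardware), the top-left-origin convention makes the coordinate-to-address mapping a single multiply-and-add, and walking addresses upward from 0 visits pixels in exactly the order a top-left-to-bottom-right scan wants them:

// Illustrative only: linear screen-memory offset with a top-left origin.
int pixelOffset(int x, int y, int screenWidth) {
    return y * screenWidth + x;   // (0, 0) maps to the very first cell of screen memory
}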

The opposite case, where the two addressing schemes work in the opposite direction seems to increase hardware complexity for no apparent hardware benefit. At the time these decisions were being made, hardware was expensive.

However, I think the story starts with text terminals, remote text display devices for shared computers. Some old-fashioned 'dumb character terminals' (which were CRT based) were addressed from the first line down when they moved to a character position. So if you think about textual displays as a starting point, it might make more sense than starting with graphics.

A top-left origin makes sense for things like textual menus and textual user interfaces (e.g. forms), which are typically read top to bottom, left to right in western languages.

It is easy to write text onto a screen starting top left (western language), and it is somewhat independent of screen resolution; just let the device fold text or scroll when a boundary is reached.

Further, it took quite a lot of processing power to scroll the text up the screen (yes, seriously), and that visual effect was quite unpleasant (scrolling an entire row of text by a character height, on a long persistence phosphor screen), so it would make some sense to make it easy to start writing at the top of the screen, and minimise scrolling.

Typically the screen could be cleared completely, and the cursor set top left with a single command. Also it might be using communications systems with under 300 characters/second, 2400 baud (I know people who read faster than that).

Those display devices would typically wrap onto the next line automatically and silently (yes, seriously).

So it takes no maths to write (western) text top to bottom, starting at the top; the devices can be quite simple, and the comms quite slow.

So the display is doing some of the work autonomously; it is not all done by the program, and in-terminal processing is limited. It might even struggle to receive text and write it if it had to scroll for every line it receives.

The terminal is connecting to the computing system over public telephone lines, using modems, and there is limited bandwidth (say 2400 baud). The shared computer might not know what type of terminal is connecting.

It is practical to write from the top left, using little more than tab, line-feed, return and clear screen. As long as the developer ensures the textual menu does not scroll off the screen on the smallest screens, then it is somewhat device and screen resolution independent. The effect is quite pleasing because the screen text 'stays still', making it easy to read before the whole screen is finished, rather than flickering with scrolling.

Driving round the screen (e.g. using cursor keys), or jumping to a specific character location is straightforward because the in-memory coordinates of the text and the screen coordinates are 'the same'.

","51434","","-1","user40980","2020-06-16 10:01:49","2014-08-06 02:57:53","","","","5","","","","CC BY-SA 3.0" "363314","2","","363307","2018-01-03 05:45:19","","23","","

No.

I'd probably call that premature optimization, in a broad sense, regardless of whether you're optimizing for performance, as the phrase generally refers to, or anything else that can be optimized, such as edge-count, lines of code, or even more broadly, things like ""design.""

Implementing that sort of optimization as a standard operating procedure puts the semantics of your code at risk and potentially hides the edges. The edge cases you see fit to silently eliminate may need to be explicitly addressed anyway. And, it is infinitely easier to debug problems around noisy edges (those that throw exceptions) over those that fail silently.

And, in some cases, it's even advantageous to ""de-optimize"" for the sake of readability, clarity, or explicitness. In most cases, your users won't notice that you've saved a few lines of code or CPU cycles to avoid edge-case handling or exception handling. Awkward or silently failing code, on the other hand, will affect people -- your coworkers at the very least. (And also, therefore, the cost to build and maintain the software.)
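
As a small hypothetical illustration (names made up), compare an edge that fails loudly at the source with one that is silently ""optimized"" away and surfaces much later, if at all:

// Hypothetical Java sketch of a noisy edge versus a silent one.
static double average(int[] values) {
    if (values.length == 0) {
        throw new IllegalArgumentException(""cannot average zero values"");  // loud, debuggable at the source
    }
    int sum = 0;
    for (int v : values) sum += v;
    return (double) sum / values.length;
}

static double averageQuietly(int[] values) {
    if (values.length == 0) {
        return 0.0;  // edge hidden; bad data propagates and fails far from here
    }
    int sum = 0;
    for (int v : values) sum += v;
    return (double) sum / values.length;
}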

Default to whatever is more ""natural"" and readable with respect to the application's domain and the specific problem. Keep it simple, explicit, and idiomatic. Optimize as is necessary for significant gains or to achieve a legitimate usability threshold.

Also note: Compilers often optimize division for you anyway -- when it's safe to do so.

","94768","","94768","","2018-01-04 16:08:12","2018-01-04 16:08:12","","","","17","","","","CC BY-SA 3.0" "363678","2","","363655","2018-01-08 21:17:32","","6","","

In Java, static final constants can be copied, by the compiler, as their values, into code which uses them. As a result of this, if you release a new version of your code, and there is some downstream dependency that has used the constant, the constant in that code will not be updated unless the downstream code is recompiled. This can be a problem if they then make use of that constant with code that expects the new value, as even though the source code is right, the binary code isn't.
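
A minimal sketch of the effect, using hypothetical names:

// Library.java -- version 1 of a hypothetical library jar
public class Library {
    public static final int MAX_RETRIES = 3;   // compile-time constant
}

// Client.java -- compiled against version 1
public class Client {
    public static void main(String[] args) {
        // javac copies the literal 3 into Client.class; the reference to
        // Library.MAX_RETRIES disappears. Dropping in a new library jar where
        // MAX_RETRIES is 5 still prints 3 until Client is recompiled.
        System.out.println(Library.MAX_RETRIES);
    }
}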

This is a wart in the design of Java, since it's one of very few cases (maybe the only case) where source compatibility and binary compatibility aren't the same. Except for this case, you can swap out a dependency with a new API-compatible version without users of the dependency having to recompile. Obviously this is extremely important given the way in which Java dependencies are generally managed.

Making matters worse is that the code will just silently do the wrong thing rather than producing useful errors. If you were to replace a dependency with a version with incompatible class or method definitions, you would get classloader or invocation errors, which at least provide good clues as to what the problem is. Unless you've changed the type of the value, this problem will just appear as mysterious runtime misbehavior.

More annoying is that today's JVMs could easily inline all the constants at runtime without performance penalty (other than the need to load the class defining the constant, which is probably being loaded anyway); unfortunately, the semantics of the language date from the days before JITs. And they can't change the language because then code compiled with previous compilers won't be correct. Bugward-compatibility strikes again.

Because of all this some people advise never changing a static final value at all. For libraries which might be distributed widely and updated in unknown ways at unknown times, this is good practice.

In your own code, especially at the top of the dependency hierarchy, you will probably get away with it. But in these cases, consider whether you really need the constant to be public (or protected). If the constant is package-visibility only, it's reasonable, depending on your circumstances and code standards, to assume that the entire package will always be recompiled at once, and the problem then goes away. If the constant is private, you have no problem and can change it whenever you like.

","292832","","","","","2018-01-08 21:17:32","","","","0","","","","CC BY-SA 3.0" "58212","2","","54691","2011-03-15 14:08:28","","4","","

I've copied and pasted the top 10 from a recent blog post:

  1. Servant Leader – Must be able to garner respect from his/her team and be willing to get their hands dirty to get the job done

  2. Communicative and social – Must be able to communicate well with teams

  3. Facilitative – Must be able to lead and demonstrate value-add principles to a team

  4. Assertive – Must be able to ensure Agile/Scrum concepts and principles are adhered to, must be able to be a voice of reason and authority, and make the tough calls

  5. Situationally Aware – Must be the first to notice differences and issues as they arise and elevate them to management

  6. Enthusiastic – Must be high-energy

  7. Continual improvement – Must continually be growing one's craft, learning new tools and techniques to manage oneself and a team

  8. Conflict resolution – Must be able to facilitate discussion and facilitate alternatives or different approaches

  9. Attitude of empowerment – Must be able to lead a team to self-organization

  10. Attitude of transparency – Must desire to bring disclosure and transparency to the business about development and grow business trust

","8874","","8874","","2011-05-02 23:25:57","2011-05-02 23:25:57","","","","4","","","","CC BY-SA 3.0" "147116","2","","147059","2012-05-03 16:37:51","","5","","

Leaky abstraction

Why should an interface specify which exceptions can be thrown? What if the implementation doesn't need to throw an exception, or needs to throw other exceptions? There's no way, at an interface level, to know which exceptions an implementation may want to throw.

Nope. Exception specifications are in the same bucket as return and argument types - they are part of the interface. If you can't conform to that specification, then don't implement the interface. If you never throw, then that's fine. There's nothing leaky about specifying exceptions in an interface.
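
A minimal Java sketch of that point (hypothetical names): the throws clause is declared on the interface like any other part of the signature, and an implementation that never throws simply omits it:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

interface DocumentStore {
    // The checked exception is part of the contract, just like the return type.
    void save(String name, byte[] contents) throws IOException;
}

class InMemoryStore implements DocumentStore {
    private final Map<String, byte[]> docs = new HashMap<>();

    @Override
    public void save(String name, byte[] contents) {   // declaring no exception at all is fine
        docs.put(name, contents.clone());
    }
}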

Error codes are beyond bad. They're terrible. You have to manually remember to check and propagate them, every time, for every call. This violates DRY, for a start, and massively blows up your error-handling code. This repetition is a far bigger problem than any faced by exceptions. You can never silently ignore an exception, but people can and do silently ignore return codes- definitely a bad thing.

","8553","","","","","2012-05-03 16:37:51","","","","6","","","","CC BY-SA 3.0" "253950","2","","253925","2014-08-20 23:40:23","","4","","

It would be shady to allow people to draw incorrect conclusions about the authorship of whatever code the fork ships, even if the legalities are covered by providing the necessary notices and revision history for anyone who chooses to look closely. So maybe Xamarin's presentation is unethical, maybe it isn't, but I think that's the basis on which to judge it: does it mislead?

The license lays down permission to use the code and a requirement to include relevant copyright notices with copies of the code. That's all at quite a low level. It doesn't discuss how you should publicly summarize who contributed what, but just because that's outside the scope of the license and not part of the legal agreement doesn't mean anything goes ethically. Ethics vary, but giving honest credit where due is quite a widely-held principle so it's easy to see why failing to do so will give offence.

Like everyone says there's no intention in the MIT license to prevent forks, so that's not unethical of itself. If ""rebrand it as your own"" is code for ""make public claims of credit you don't deserve"", then sure that would be unethical if true.

As for preventing it happening to you: if you want to avoid the situation of someone else insinuating that your code is theirs, then you need a loud voice when claiming credit. If you want to avoid the situation of someone creating a fork of your code that might eventually prove more popular than your original (either due to their greater resources or just their focussing on the ""right"" user needs) then I think you're out of luck in OSS. You can't just decide to be right if another group wants different features in the software from what you want, and if (in the view of users) you're wrong then you should lose regardless of being there first. This is a consequence of the primary open source principle (or properly, free software principle) that the author doesn't control the software, the people who run it do.

","18658","","18658","","2014-08-22 10:31:07","2014-08-22 10:31:07","","","","0","","","","CC BY-SA 3.0" "58698","2","","58667","2011-03-16 15:50:35","","6","","

I'm 19 years old.

So? They were 19, too.

makes me think that I'll know so little of what they'd want me to know. Like they will expect so much.

Based on what? Any rational basis for this?

I'm scared that I'll freeze up, forget everything I know, and stutter like an idiot.

Really? Why? If you met them socially, what would you say?

Remember, an interview is a two-way street. They want to know you, and you want to know them.

You're not begging for a position. You're there to solve a particular problem they have.

Intern-scale problems are specifically identified, budgeted-for and set aside. Companies have an informal backlog of projects waiting for the next intern.

You're there to solve a problem that they set aside for an intern.

You need to know about them and the project they've set aside for you.

When I was in the interview, I was so nervous I couldn't think clearly.

To survive a SCUBA accident under water there are three rules.

  1. Stop.

  2. Breathe Normally.

  3. Think Logically.

These rules are universal. You can, during an interview, stop, breathe and think. Silence is a good thing. Cultivate it.

Proverbs 17:28: Even fools are thought wise when they keep silent; with their mouths shut, they seem intelligent.

Take time to think. Take time to know them as people.

","5834","","","","","2011-03-16 15:50:35","","","","2","","","","CC BY-SA 2.5" "254052","1","254082","","2014-08-21 19:53:14","","1","239","

My team currently has a project with a data access object composed like so:

public abstract class DataProvider 
{
     public CustomerRepository CustomerRepo { get; protected set; }
     public InvoiceRepository InvoiceRepo { get; protected set; }
     public InventoryRepository InventoryRepo { get; protected set; }
     // couple more like the above
}

We have non-abstract classes that inherit from DataProvider, and the type of ""CustomerRepo"" that gets instantiated is controlled by that child class.

public class FloridaDataProvider : DataProvider 
{
     public FloridaDataProvider() 
     {
          CustomerRepo  = new FloridaCustomerRepo(); // derived from base CustomerRepository
          InvoiceRepo = new InvoiceRespository();
          InventoryRepo = new InventoryRepository();
     }
}

Our problem is that some of the methods inside a given repo really would benefit from having access to the other repo's. Like, a method inside InventoryRepository needs to get to Customer data to do some determinations, so I need to pass in a reference to a CustomerRepository object.

What's the best way for these ""sibling"" repos to be aware of each other and have the ability to call each other's methods as needed? Virtually all the other repos would benefit from having the CustomerRepo, for example, because it is where names/phones/etc are selected from, and these data elements need to be added to the various objects that are returned out of the other repos.

I can't just new-up a plain ""CustomerRepository"" object inside a method within a different repo, because it might not be the base CustomerRepository that actually needs to run.

","25364","","25364","","2014-08-21 20:01:19","2014-08-22 06:31:10","How to make the members of my Data Access Layer object aware of their siblings","","1","2","1","","","CC BY-SA 3.0" "363992","2","","186036","2018-01-15 05:29:26","","3","","

In my opinion, validating inputs (i.e. pre/post-conditions) is a good thing to detect programming errors, but only if it results in loud, obnoxious, show-stopping errors of a kind which cannot be ignored. assert typically has that effect.

Anything falling short of this can turn into a nightmare without very carefully-coordinated teams. And of course ideally all teams are very carefully-coordinated and unified under tight standards, but most environments I've worked in fell far short of that.

Just as an example, I worked with some colleagues that believed that one should religiously check for the presence of null pointers, so they sprinkled a lot of code like this:

void vertex_move(Vertex* v)
{
     if (!v)
          return;
     ...
}

... and sometimes just like that without even returning/setting an error code. And this was in a codebase which was several decades old with many acquired third party plugins. It was also a codebase plagued with many bugs, and often bugs which were very difficult to trace down to root causes since they had a tendency to crash in sites far removed from the immediate source of the problem.

And this practice was one of the reasons why. It's a violation of an established pre-condition of the above vertex_move function to pass a null vertex to it, yet the function just silently accepted it and did nothing in response. So what tended to happen was that a plugin might have a programmer mistake which caused it to pass null to said function, only to not detect it, only to do many things afterwards, and eventually the system would start flaking out or crash.

But the real issue here was the inability to easily detect this problem. So I once tried to see what would happen if I turned the analogical code above to an assert, like so:

void vertex_move(Vertex* v)
{
     assert(v && ""Vertex should never be null!"");
     ...
}

... and to my horror, I found that assertion failing left and right even upon starting up the application. After I fixed the first few call sites, I did some more things and then got a boatload more assertion failures. I kept going until I had modified so much code that I ended up reverting my changes because they had become too intrusive and begrudgingly kept that null pointer check, instead documenting that the function allows accepting a null vertex.

But that's the danger, albeit a worst-case scenario, of failing to make violations of pre/post-conditions easily detectable. You can then, over the years, silently accumulate a boatload of code violating such pre/post-conditions while flying under the radar of testing. In my opinion such null pointer checks outside of a blatant and obnoxious assertion failure can actually do far, far more harm than good.

As to the essential question of when you should check for null pointers, I believe in asserting liberally if it's designed to detect a programmer error, and not letting that go silent and hard to detect. If it's not a programming error and something beyond the programmer's control, like an out of memory failure, then it makes sense to check for null and use error handling. Beyond that it's a design question and based on what your functions consider to be valid pre/post conditions.

","","user204677","","user204677","2018-01-15 05:37:45","2018-01-15 05:37:45","","","","0","","","","CC BY-SA 3.0" "58710","2","","58667","2011-03-16 16:15:58","","2","","

Relax, as best you can. Take a deep breath. And tell yourself this:

""It's OK if I fail.""

Because, seriously. You're allowed to screw this up. You will not be sleeping in a cardboard box if you botch an answer. There will be other opportunities.

Besides, this might not even be that good a gig! If the lead developers DO have ridiculous expectations for a 19-year-old would-be intern? Pfft. Screw 'em. And pity whatever poor soul they settle for, because whoever it is will be forever failing to meet their expectations.

They have to impress you just as much as you have to impress them.

Ditto if they come down on you for being too nervous. If you're nervous, cop to it. If you need a little time to think, say you need a little time to think. If this gets held against you, screw 'em; job interviews make people nervous. If they've forgotten that, they're no longer accustomed to working with actual people.

But the best thing you can do for yourself is give yourself the freedom to fail. If that mean little voice gets the better of you, then you're practicing how to ignore it. If you come up with a better algorithm after the interview is over, then it's just something you can put in the back of your mind should you ever get a similar question. If you panic so badly you forget your name, refer to the interviewer by the name of your 9th-grade algebra teacher, and pretend you've temporarily lost the capacity to speak English just so you can buy yourself some time, then you've got a good anecdote for the next time you're swapping war stories with your buddies. ""You think YOU bombed your interview? Well, this one time I....""

Chill. Either you do well and get the job, or you get some practice so the next interview won't be so scary and traumatic. It's all degrees of Win.

","17","","","","","2011-03-16 16:15:58","","","","0","","","","CC BY-SA 2.5" "59387","1","59521","","2011-03-18 10:00:19","","86","12603","

One can often hear that OOP naturally corresponds to the way people think about the world. But I would strongly disagree with this statement: We (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies.

Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. Examples of such relationships are: ""my screen is on top of the table""; ""I (a human being) am sitting on a chair""; ""a car is on the road""; ""I am typing on the keyboard""; ""the coffee machine boils water"", ""the text is shown in the terminal window.""

We think in terms of bivalent (sometimes trivalent, as, for example in, ""I gave you flowers"") verbs where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on action, and the two (or three) [grammatical] objects have equal importance.

Contrast that with OOP where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns -- it is as if everything is being said in passive or reflexive voice, e.g., ""the text is being shown by the terminal window"". Or maybe ""the text draws itself on the terminal window"".

Not only is the focus shifted to nouns, but one of the nouns (let's call it grammatical subject) is given higher ""importance"" than the other (grammatical object). Thus one must decide whether one will say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? [Consequences are operationally insignificant -- in both cases the text is shown on the terminal window -- but can be very serious in the design of class hierarchies and a ""wrong"" choice can lead to convoluted and hard to maintain code.]

I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, this is not a widespread approach.

Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it?

","6564","","6564","","2011-03-18 10:15:37","2015-04-13 23:04:41","Is OOP hard because it is not natural?","","22","9","44","","2011-09-22 16:08:31","CC BY-SA 2.5" "364618","2","","364572","2018-01-24 18:17:13","","7","","

I see two issues regarding asynchronous file IO:

  • Absence of async file IO on Linux.
  • Completion-based vs readiness-based async IO.

Linux provides the syscalls io_setup, io_submit, io_getevents and a few others to manage asynchronous file IO. It has the following constraints:

  • The file should be opened with the O_DIRECT flag, i.e. all operations bypass the file cache. This alone makes it worthless for most applications.
  • Both the file offset and the buffer address should be aligned to 512 or 4096 bytes (depending on the underlying filesystem). This is done to make it possible to read/write data directly to/from the user buffer.

If the user violates any of those constraints, io_submit will silently perform all operations synchronously.

I read somewhere on the Nginx mailing list years ago that this API was implemented by Oracle for their database. They only needed asynchronous file IO that bypasses the file cache (something databases do), so they left the implementation incomplete.

POSIX provides the aio_write and aio_read functions, but on Linux those are implemented in userspace using a thread pool, which makes existing implementations non-conforming (it is illegal to use those functions from a signal handler, for example).

Completion-based vs readiness-based IO is not related only to files. Completion-based IO is when the user gets notified about completion of the whole operation, while with a readiness-based API the user is only notified that reading or writing can be performed without blocking.

Completion-based IO is more general and can work with threads better. Readiness-based IO can only be used with non-blocking IO and thus cannot be used with files.

Completion-based IO can be implemented using readiness-based IO, but the opposite is not true. So if library provides readiness-based IO that works with sockets, it cannot provide the same interface for files.

On Windows the most efficient native asynchronous API is completion-based and is called overlapped I/O. Unix-like systems primarily use readiness-based IO: epoll, kqueue, /dev/poll.

Linux does make it possible to get a completion notification through eventfd, but there is no point when there are so many limitations.

I think FreeBSD implements POSIX async IO in the kernel and allows you to receive completion notification through kqueue. I am not sure how good it is though.

","289860","","289860","","2018-01-24 22:29:09","2018-01-24 22:29:09","","","","2","","","","CC BY-SA 3.0" "60325","2","","60324","2011-03-21 01:11:08","","13","","
  • How might I interact differently with the management chain?

The large company will be more bureaucratic than you're used to. You'll interact with the layers above and below you; skips will be rare.

  • Do you see trends in quality or speed of development that differ between large and small?

You'll have more layers. You won't have admin access to production servers, so there will be more hand-offs. Communication channels and documentation and process will slow things down in the larger firm.

  • Thoughts on team developent vs. cowboy coding.

Irrelevant; both large and small can be either one.

  • Social Aspects.

Larger firms tend to be more conservative, because there's more to lose.

Larger firms have one big advantage: they know how to make payroll. Some of the smaller firms I worked with failed at it. Sales and keeping the revenue stream flowing can be a problem for a smaller firm.

  • Anything else.

You'll be one voice among many. Your influence will depend more on how well you can integrate yourself in with the movers and shakers.

","4316","duffymo","","","","2011-03-21 01:11:08","","","","1","","","","CC BY-SA 2.5" "148996","2","","148840","2012-05-17 07:48:51","","3","","

As mentioned by Explosion Pills, in a complex application most of the objects relate to application components (e.g. database connection pools, commands, data structures such as hashmaps) rather than real world entities (such as a boarding pass, invoice, or mp3 file). There are many good books on design patterns that show you ways that people have solved a lot of recurring problems in this area. The GoF book, as it is known, is thorough but very dry; Head First Design Patterns may be more accessible.

In terms of real world analysis and design. It is often helpful to think in terms of nouns and verbs. For example a video lending library (are these obsolete now?) may have these things/nouns:

  • Video
  • Borrower

In terms of verbs:

  • A Borrower can take out a video for a length of time
  • A Borrower can return a video to the store etc.

These can then be turned into classes with operations (it's a long time since I've done any PHP so I'll avoid it):

class Borrower
{
  public void borrow(Video video, int daysToBorrow)
  {
     ...
  }

  public void returnVideo(Video video, boolean calculateFine)
  {
     ...
  }
}
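
A matching sketch for the Video noun might start out just as small (purely illustrative, not a prescription):

class Video
{
  private String title;     // details such as rating, genre, etc. omitted
  private boolean onLoan;

  public boolean isAvailable()
  {
     return !onLoan;
  }
}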

It all takes a LOT of practice and playing around. The best thing to do is get stuck in and learn from failed designs. In my opinion OO is something that you can continue to learn and develop over your lifetime (it is not easy and there are no perfect solutions to anything). Good design is often iterative, so expect to try out a few different ideas for your ""Craig's List"" webapp.

","54113","","","","","2012-05-17 07:48:51","","","","0","","","","CC BY-SA 3.0" "255311","1","255313","","2014-09-03 22:25:32","","18","1268","

Today I realized painfully that for some decisions you need a good overall understanding of the system. Otherwise, the risk is too high that your assumptions turn out to be wrong.

Imagine that you are a developer for an online web shop. To understand the system, you have to understand many connected subsystems, for example:

  1. How to receive and process product information from various suppliers
  2. How the customer can search and order products on your web shop
  3. How the orders are processed and managed by your customer service
  4. How your SAP system handles the invoicing process
  5. ...

The larger the system becomes, the more you have to understand.

Where that knowledge was lacking, sub-optimal solutions were developed when specialized teams worked together: teams which only understood their part of the system in detail.

To deal with that problem, our company changed its strategy, so that one development team always has to be responsible for all aspects of a feature, even if it involves the complete process chain. (It is a kind of feature team, as opposed to separate teams each working on different subsystems.)

What are effective strategies for developers to keep their system and operational process knowledge up to date?

I think good system documentation is key, but I'm afraid that there is a point where the human mind cannot scale as fast as the system evolves. At some point you have to simplify, but those simplified assumptions can turn out to be costly mistakes. When you have to implement and maintain the code, you just have to know the exact details.

As a developer, I currently have to face a difficult conflict of interests:

  1. I need to spend more time to understand our system and operational process.
  2. I need to develop and maintain our code.

As time is limited, 2) mostly wins. The result is that I mostly gain deeper knowledge along the way, and some half knowledge from casual conversations.

Do you know how huge companies like Amazon solve that problem? I would assume that no single human is capable of understanding such a complex process and of contributing code to multiple subsystems at the same time.

","76071","","","","","2014-09-03 23:21:32","When systems get larger and larger, how do you keep a global understanding of your system?","","1","1","6","2014-09-06 00:52:54","","CC BY-SA 3.0" "61329","2","","61296","2011-03-24 01:20:55","","4","","

I have interviewed a lot of people during my time as a software practitioner. I have reached a point where I believe that quizzes and toy programming assignments are a waste of valuable bandwidth. Quizzes and toy programming assignments only serve to test what the interviewer knows. They are not an accurate way to gauge what a candidate knows. At this point in my career, I only accept this type of nonsense if, and only if, I am given the opportunity to administer my own test at the end of the interview.

The best way to assess a software practitioner's capabilities is to talk to him or her in a calm reassuring voice. Ask the candidate to discuss what his/her current position entails. When the candidate brings up an area of interest, ask him/her to elaborate on that area. The goal here is to get the candidate to let down his/her guard. No amount of coaching can prepare a candidate for the ""soft touch"" interrogation. Sooner or later, the noose is going to tighten around the neck of a candidate who is trying to BS his/her way through an interview.

","17687","","","","","2011-03-24 01:20:55","","","","0","","","","CC BY-SA 2.5" "365374","2","","365310","2018-02-05 16:57:17","","3","","

I would advise against putting conditionals in your method names. If you want the calling code to read like a conditional, use a conditional:

if (cakeIsNeeded) buyCake();

Or, if ternary operators or short-circuits are your thing:

cake = cake == null ? buyCake() : cake;
cake = cake || buyCake();

Otherwise, you can either silently ignore repeat calls, use memoization, or throw exceptions in the method to deal with repeat calls — whatever feels most appropriate and least surprising for the particular method.

If you have a method name that feels like repeat calls would re-perform their action(s) but won't, one way to beat the ""surprise"" is to add an optional force, skipCache, or similar boolean parameter. (The name of the flag should be relevant to the method name and/or skip-logic.)
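
A small hypothetical sketch of that flag, combined with the memoization mentioned above:

// Hypothetical sketch: memoized getter with an explicit escape hatch.
Cake getCake(boolean forceRefresh) {
    if (forceRefresh || cachedCake == null) {
        cachedCake = buyCake();   // only re-buys on the first call or when explicitly asked
    }
    return cachedCake;
}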

All that said, I'd tend to look for a verb that implies what the caller needs, rather than what the method does. In the example of cake, the caller wants the cake and doesn't care much where it comes from. It sounds to me like you just want to getCake() or findCake().

Both of those names communicate that Cake will be returned. They don't reveal to the caller how that Cake will be located. It could be purchased, taken from the counter, or made by magical elves. Those are implementation details.


One important caveat to all of this: these naming patterns tend to be highly idiomatic. Refer to your language's standard libraries for examples of how they handle this, and talk to your team to decide on your own internal idioms.

","94768","","94768","","2018-02-05 17:56:29","2018-02-05 17:56:29","","","","0","","","","CC BY-SA 3.0" "366214","1","366322","","2018-02-19 15:47:20","","6","2492","

We are developing an application where providers can offer their products and consumers can buy them (sort of marketplace). We try to apply DDD concepts into our model design and the implementation follows a microservices style. This implies that the data belongs to a Bounded Context (BC) and only the microservices within that BC can access it. Outside that BC, specific information can only be either queried through a public interface of the BC or by subscribing to events published by that BC.

My question is about the design of the Orders. Orders are placed by consumers and accepted and fulfilled by providers. They can also be manipulated by customer service. An order right now contains only products from a single provider, but I might be asked in the future to support buying from multiple providers at once.

All implementations I've seen of similar systems contain a single Order model, which tends to be really bloated with information about the products, the provider, the consumer, invoicing, deliveries, payments, etc. I am trying to avoid that, but I am facing the question of "Who owns the order"?

I can think of the following answers:

  1. There is an Orders bounded context which is accessed by both the consumer and the provider. This means that the consumer API has a Place Order operation that talks to the Orders BC and creates an order and the Providers API has an operation like Accept Order which talks to the same Orders BC and changes the status of that same order model.
  2. There are 2 Orders BCs: Consumer Orders and Provider Orders. The Consumer API places an order in the Consumer BC. This creates the order and publishes a ConsumerOrderCreatedEvent. The ProviderOrders BC listens to that event and creates a local Order (ProviderOrder) which references the ConsumerOrder. Through the Provider API, the order can be accepted, which will publish a ProviderOrderAcceptedEvent, which will allow the ConsumerOrders to mark the order as accepted and notify the consumer about it.

My personal preferred approach is option 2 as I can see several benefits (see below), but I'm not sure if they are worth the added complexity.

I can't formulate a specific question, but as this problem must have been solved thousands of times, I'd like to know if there is one preferred approach, well-known solution or reference design that can help me.

Benefits of separate ProviderOrders and ConsumerOrders bounded contexts:

  1. A single ConsumerOrder can generate multiple ProviderOrders (if the order contains products from multiple providers)
  2. The workflow of a ProviderOrder might be different/more complex than the workflow of a ConsumerOrder.
  3. Both the consumer and the provider need to see their order history, which I envision as a denormalized table for fast reads, but both order histories contain different data (i.e. consumer orders contain provider information and provider orders contain consumer information) and are queried differently (by the consumer and by the provider). This can be implemented in a single table obviously, but it seems cleaner if they are 2 tables each dedicated to a single purpose.
  4. Data isolation/partitioning. Consumer orders are always accessed by consumer Id, Provider Orders are always accessed by ProviderId.

I'm having a very interesting conversation about this topic on a separate forum, so I thought I should link it here, in case someone wants to read more thoughts on this topic. Same question on NServiceBus discussion board

Note: This is implemented in .NET, by multiple teams, from multiple repositories and Visual Studio Solutions, hosted in a Service Fabric cluster and using NServiceBus for messaging.

","238707","","-1","","2020-06-16 10:01:49","2018-02-27 14:32:37","Who owns the Orders in a consumer-provider marketplace like platform?","","3","9","3","","","CC BY-SA 3.0" "256777","1","256793","","2014-09-19 23:46:33","","4","1119","

The context

I'm modeling a database for a small ERP system. However, I've recently hit a difficult spot that I'm having a hard time wrapping my head around. The logic of it involves a few special cases; I'm hoping someone with a DB design background might help (this is my first large DB model project).

  1. Contact is a table holding information on various people.
  2. A contact has an organization_id field which is a foreign key to Organization.id
  3. We handle a case where if a contact has no organization (organization_id = null) it is a ""freelancer""...
  4. Organization is a table holding information on organizations. An organization is linked to many contacts.
  5. Invoice is a table holding invoice information.

The problem: Suppose a contact A has an invoice X and that contact changes organization (after the transaction). Who owns the invoice? (in other words, how do I link invoices to certain entities).
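
For concreteness, here is a minimal sketch of the tables as described above (SQLite syntax via Python; the column names are illustrative):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
CREATE TABLE organization (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE contact (
    id              INTEGER PRIMARY KEY,
    name            TEXT NOT NULL,
    organization_id INTEGER NULL REFERENCES organization(id)  -- NULL means freelancer
);

-- The open question: which foreign key(s) should invoice carry?
CREATE TABLE invoice (
    id        INTEGER PRIMARY KEY,
    issued_on TEXT NOT NULL,
    total     REAL NOT NULL
    -- organization_id? contact_id? both?
);
''')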

Possible solutions I have explored

  1. Link Invoice to Organization with a foreign key (organization_id) in table Invoice.

However, this does not handle the case where a Contact has no organization (is a freelancer). If such a contact has a sale/invoice... the system can't handle it.

  2. Link Invoice to Contact with a foreign key (contact_id) in table Invoice.

However, if a contact changes organization, that organization would inherit the contact's past invoices (which is WRONG).

  3. On the front-end, auto-generate an Organization based on a Contact's information when that contact is a ""freelancer"".

To be honest, I don't like this solution. It feels like a cheap hack.

  4. Force contacts to have an organization...

I'm hoping there is another solution than this one...

EDIT #1

After analyzing some of the answers, I've realized an important piece of information is missing. The small ERP system will be used by many clients, some of which follow the B2B (Business-to-Business) model and others which follow the B2C (Business-to-Consumer) model. In the B2C model, Contacts DON'T have an Organization. But they should still be able to have projects/sales associated with them.

","142282","","163990","","2016-05-26 22:53:26","2016-05-26 22:53:26","Database design, how to handle freelancers","","3","8","","","","CC BY-SA 3.0" "256886","2","","256864","2014-09-21 23:44:21","","21","","

From what I've seen, it really boils down to the whole ""Windows experience"". That is, making any action or option as visible to the user as possible.

The reason I say this is that a GUI is not necessary for installation. MSI-based installers can be silently installed in a similar fashion to Linux-based packages. The GUI is completely optional, but again is there to give the user a visual representation of what is going on in the background.

In Linux, this is easily accomplished by use of a package manager. If I want to install a package, I have to specifically request that package. For the less technically inclined, a GUI-based package manager is usually available for the user to install desired software.

In Windows, no such thing exists. If a user wants to install Windows-based software, they have to find and download the software separately. There is no standardized tool to assist the user in configuring and installing the software. Therefore the install GUI that comes bundled with each piece of software is very much like the package manager GUI in Linux. It simply exists to allow the user to configure the installation and track its progress.

There are plenty of cases where an install GUI is not necessary due to the presence of a management GUI. For example, the popular Steam platform will install any games or software available through the Steam store automatically with the assistance of install scripts.

Another great example would be SCCM. System Center Configuration Manager (SCCM for short) is a software used to manage groups of computers on a network. It includes the ability to make software available for install through a GUI called Software Center. Any MSI-based installer can be made available to install at the click of a button. In the environment that I work in, we have software ranging from Adobe's Creative Suite to things such as WinZip available. All a user needs to do is search the catalog to find what's available, click install, and wait for confirmation. It is almost the exact same process as if I wanted to install something on my home computer running Linux Mint.

","132133","","","","","2014-09-21 23:44:21","","","","2","","","","CC BY-SA 3.0" "256891","2","","256890","2014-09-22 00:58:52","","6","","

A desktop application is not a web application.

A native mobile application is not a web application.

The fact that a desktop/mobile application relies heavily on internet resources doesn't change anything. Nowadays, most applications do, since they need to empower their users with centralized online storage, off-site backups, the power of cloud computing, etc.

Note that it doesn't matter where the web application performs most of its processing. There is a distinction to make between a website (something which displays mostly static content and provides only basic interactivity) and a web application, but a web application may run entirely on the client side (through JavaScript), store all the data on the client side as well, and use a server only to retrieve source files (HTML, CSS and JavaScript).

Are web applications really different from desktop ones?

The distinction between a desktop/mobile application on one hand and a web application on the other hand matters for the following reasons:

  • Desktop and native mobile applications usually have more permissions than web applications. Web applications run in a sandbox and have no or low permissions on the client side. They can possibly store their data on the client machine (given that even this can easily be overridden by the user), but cannot access hardware and a person's files in the same way desktop applications do.

    This applies as well to RIA (Flash and Silverlight), which have a limited set of privileges compared to a desktop application.

  • Desktop and native mobile applications can benefit more in terms of performance from being close to the hardware. This tends to be less true recently, especially given the advances in performance of the V8 JavaScript engine, but it still holds for anything computationally intensive or anything that pushes the hardware to its limits. If you're not convinced, imagine Crysis 3 rewritten in JavaScript and fed through a browser.

  • Desktop and native mobile applications can provide much more in terms of user experience (UX). Even with HTML 5 and CSS 3, web applications are very, very far from the capabilities, in terms of UX, of desktop applications. Shortcuts are a good example: if you're not convinced, try to respond to shortcuts such as Ctrl+Shift+N in your web application and don't forget to test how well it works in Chrome.

  • Some companies make a difference between web programmers and other programmers when hiring people. Web development requires slightly different knowledge compared to that needed to develop applications for PCs or mobile devices.

    On the other hand, the gap tends to shrink considerably with the efforts of companies such as Microsoft to bring a common set of tools and paradigms for both worlds. For instance, writing an application in WPF is not very different from writing one for Silverlight.

  • The way of distribution is different. Usually, a desktop application is downloaded and installed by the user, and the user has to make an additional effort in order to get a more recent version of the application. The effort can be as large as a new purchase (for example buying Microsoft Office 2013 to replace Microsoft Office 2007) or as small as a mouse click (for example installing a security update for Microsoft Office 2007). On the other hand, the user has usually no control over the updating and upgrading of a web application; those are often done silently.

    Note that recently, more and more products tend to follow the model of web applications. Google Chrome is an excellent example: updates are done automatically (like Firefox) and in a non-intrusive way (unlike Firefox). Pay-per-month subscriptions also contribute to this movement for paid applications.

","6605","","6605","","2014-09-22 01:27:07","2014-09-22 01:27:07","","","","0","","","","CC BY-SA 3.0" "63935","2","","63859","2011-03-31 12:44:13","","252","","

Note that I'm no longer updating this answer. I have a much longer Python 3 Q & A on my personal site at http://python-notes.curiousefficiency.org/en/latest/python3/questions_and_answers.html

Previous answer:

(Status update, September 2012)

We (i.e. the Python core developers) predicted when Python 3.0 was released that it would take about 5 years for 3.x to become the ""default"" choice for new projects over the 2.x series. That prediction is why the planned maintenance period for the 2.7 release is so long.

The original Python 3.0 release also turned out to have some critical issues with poor IO performance that made it effectively unusable for most practical purposes, so it makes more sense to start the timeline from the release of Python 3.1 in late June, 2009. (Those IO performance problems are also the reason why there are no 3.0.z maintenance releases: there's no good reason anyone would want to stick with 3.0 over upgrading to 3.1).

At time of writing (September 2012), that means we're currently a bit over 3 years into the transition process, and that prediction still seems to be on track.

While people typing Python 3 code are most regularly bitten by syntactic changes like print becoming a function, that actually isn't a hassle for library porting because the automated 2to3 conversion tool handles it quite happily.

The biggest problem in practice is actually a semantic one: Python 3 doesn't let you play fast and loose with text encodings the way Python 2 does. This is both its greatest benefit over Python 2, but also the greatest barrier to porting: you have to fix your Unicode handling issues to get a port to work correctly (whereas in 2.x, a lot of that code silently produced incorrect data with non-ASCII inputs, giving the impression of working, especially in environments where non-ASCII data is uncommon).

Even the standard library in Python 3.0 and 3.1 still had Unicode handling issues, making it difficult to port a lot of libraries (especially those related to web services).

3.2 addressed a lot of those problems, providing a much better target for libraries and frameworks like Django. 3.2 also brought the first working version of wsgiref (the main standard used for communication between web servers and web applications written in Python) for 3.x, which was a necessary prerequisite for adoption in the web space.

Key dependencies like NumPy and SciPy have now been ported, installation and dependency management tools like zc.buildout, pip and virtualenv are available for 3.x, the Pyramid 1.3 release supports Python 3.2, the upcoming Django 1.5 release includes experimental Python 3 support, and the 12.0 release of the Twisted networking framework dropped support of Python 2.5 in order to pave the way for creating a Python 3 compatible version.

In addition to progress on Python 3 compatibility libraries and frameworks, the popular JIT-compiled PyPy interpreter implementation is actively working on Python 3 support.

Tools for managing the migration process have also improved markedly. In addition to the 2to3 tool provided as part of CPython (which is now considered best suited for one-time conversions of applications which don't need to maintain support for the 2.x series), there is also python-modernize, which uses the 2to3 infrastructure to target the (large) common subset of Python 2 and Python 3. This tool creates a single code base that will run on both Python 2.6+ and Python 3.2+ with the aid of the six compatibility library. The Python 3.3 release also eliminates one major cause of ""noise"" when migrating existing Unicode aware applications: Python 3.3 once again supports the 'u' prefix for string literals (it doesn't actually do anything in Python 3 - it's just been restored to avoid inadvertently making migrating to Python 3 harder for users that had already correctly distinguished their text and binary literals in Python 2).
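
As a side note, here is a hedged sketch of the kind of ""common subset"" code these tools target, assuming the six package is installed (the function itself is purely illustrative):

# Runs unmodified on Python 2.6+ and Python 3.2+ (illustrative sketch).
from __future__ import print_function, unicode_literals

import six

def greet(name):
    # six.text_type is unicode on Python 2 and str on Python 3, so the
    # text/bytes distinction stays explicit in both versions.
    if isinstance(name, six.binary_type):
        name = name.decode('utf-8')
    assert isinstance(name, six.text_type)
    return 'Hello, {0}!'.format(name)

print(greet(b'world'))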

So we're actually pretty happy with how things are progressing - there are still nearly 2 years to go on our original time frame, and the changes are rippling out nicely through the whole Python ecosystem.

Since a lot of projects don't curate their Python Package Index metadata properly, and some projects with less active maintainers have been forked to add Python 3 support, purely automated PyPI scanners still give an overly negative view of the state of the Python 3 library support.

A preferred alternative for obtaining information on the level of Python 3 support on PyPI is http://py3ksupport.appspot.com/

This list is personally curated by Brett Cannon (a long-time Python core developer) to account for incorrect project metadata, Python 3 support which is in source control tools but not yet in an official release, and projects which have more up to date forks or alternatives which support Python 3. In many cases, the libraries that are not yet available on Python 3 are missing key dependencies and/or the lack of Python 3 support in other projects lessens user demand (e.g. once the core Django framework is available on Python 3, related tools like South and django-celery are more likely to add Python 3 support, and the availability of Python 3 support in both Pyramid and Django makes it more likely that Python 3 support will be implemented in other tools like gevent).

The site at http://getpython3.com/ includes some excellent links to books and other resources for Python 3, identifies some key libraries and frameworks that already support Python 3, and also provides some information on how developers can seek financial assistance from the PSF in porting key projects to Python 3.

Another good resource is the community wiki page on factors to consider when choosing a Python version for a new project: http://wiki.python.org/moin/Python2orPython3

","18670","","18670","","2014-01-11 10:58:25","2014-01-11 10:58:25","","","","8","","","","CC BY-SA 3.0" "64398","2","","64388","2011-04-01 17:24:39","","6","","

Of course it's true. Fit is more important than most things. For instance, if you have a team that works together closely and you hire a lone wolf, you can expect arguments, code that doesn't fit the design and does things in a way the others don't like (with no intention to fix it), a lack of willingness to help others out when they need it, refusal to commit work to source control, rewriting code for no good reason except that he didn't want it that way, etc.

If you have a group of friendly people who hang out together and you hire someone who wants to work in total silence, you will have friction. You may also have harassment of the person who doesn't fit in.

If you have a group of people who want freedom to do things the way they want and you hire a process-oriented person, there will be continual arguments especially if the process-oriented person is the lead.

If you hire someone whose experience and background make them more of a beginner in a team where everyone is expected to be senior, you will have annoyance as no one wants to mentor the guy or help him learn as they expect he should be able to do it without help.

If you hire someone who expects special privileges the other employees don't have - you can expect constant warfare. The feeling is ""if Sally is so good she gets to work from home when I don't, then why should I help her out?"" There is resentment when someone comes into a team expecting things (and getting them) that the others don't get, especially when they haven't accomplished anything yet. Or there is resentment from the new employee's side if they expect things and don't get them when everyone else is fine with the way things are. Then the unhappy employee will waste everyone's time complaining and dragging around like he is being tortured because he has to come in before 1 pm.

","1093","","1093","","2014-04-02 13:01:37","2014-04-02 13:01:37","","","","0","","","","CC BY-SA 3.0" "151906","2","","150356","2012-06-07 11:01:04","","65","","

Your Own Site

Build your OWN site to distribute your software. It needs to have a home. This can be the code hosting repository where you host it and its development, but you could have a more customer facing site, and have them link to each other.

Your own site comes with additional elements:

  • your own chatroom(s),
  • your own newsgroup(s),
  • your own mailing list(s),
  • your own social network business page(s),
  • feeds (RSS/Atom) for your update channels (and some of previous points).

Notice that you can have several ones for different purposes: to talk to developers, make announcements, take care of customer support...

One point though: it's better to have one active point of communication than to get dispersed and have no content and no activity at all. It's the chicken-and-egg thing, but people are less inclined to ask questions on an empty forum. It's understandable to want to reach out to as many users as you can (we all prefer one medium over another), but wait a bit before you set up that Gopher site and an IRC channel.

Search Engines

Search Engines are the key element here: that's what everybody uses to find you. In the good ol' days (actually, the dark ages, really :)), you used to have search engines that were actually mostly keyword-based directories, and you had to submit your site to them individually/manually, or using so-called ""search-engine auto-submitters"". Some were relatively good, some would get you blacklisted easily.

Nowadays, I'd recommend you do 3 things:

Surprisingly, even Google still has pages to let you ""submit"" a site for inclusion, but usually that won't be needed. Feel free to also look for other directories and lesser-known search engines to check for your inclusion in their databases. It's a good thing to regularly check where you are.

Software Distribution Sites

As mentioned by stmax in comments, the easiest way to start promoting an app that targets known mobile devices would usually be to use their dedicated app stores. It's rather quick and easy.

Depending on your platform of choice, and whether you want to sell your app or not (and if it supports in-app payments or not), you may want to look at package management systems. These are somewhat similar to software distribution sites (in that they aggregate software distribution in one place) and app stores (in that they allow one-click installs), but usually you only use them directly from your system (and not from the web). A famous example is the Debian packaging format, with its many repositories and front-ends (which include the Ubuntu Software Center, for instance).

Social Networks

You can use social aggregators to make things easier to deal with, or at least to make it easier for your users to then enhance your popularity on several networks, for instance with ShareThis or AddThis.

Communicate Actively

This can take some time, but not that much if you're efficient and have things well prepared.

  • communicate on forums, chat rooms, newsgroups...

    • DO NOT be spammy,
    • DO answer questions that relate to your software, give full disclosure in a proper way, and kindly point people to your software when they request alternatives or solutions.
  • broadcast updates and news to your different communication streams above, tweet about them, tell your friends on FB, publish an announcement on appropriate mailing-lists:

    • when you publish a minor revision,
    • when you have a potential project or feature in mind and need feedback,
    • when you reach a milestone (# of downloads, # of users...),
    • anything, really.

Of course, broadcast these to your communication channels described above.

Write Support Material

  • Write user and development guides accordingly.
  • Publish video tutorials or demonstrations (create a Youtube and/or Vimeo channel).
  • Write tutorials on how to use your software.
  • Publish a (tentative) roadmap for future features.

Get Reviewed

  • Friends can review you on their blogs and social network pages.
  • Users can review you, and you can facilitate that by adding a ""talk about MY_PROJECT on SOCIAL_NETWORK"" link.
  • Professionals (bloggers, writers, developers...) can review your app, for free or for a compensation (this is a possibly spammy route, beware to contact the right people).
    • Contact newspapers and technical magazines, online and offline (print is NOT dead). Some might want to write an article on you, some will just write a small column, some won't but will remember your name and product later, and some might just talk about your product to some friends at the bar.

Engage your Users

  • Request feedback, and permission to publish it, via:
  • Listen to feature requests.
  • Request your users' help in promoting your software.
  • Request your users' help in identifying flaws and troubleshooting in your software.

Personally, I'm not a fan of user feedback sites like GetSatisfaction and UserVoice. They tend to slow down your site or web-app, you need to rely on them (and if they break they may break parts of your site), and they are generally more prone to downtime than a good old mailing system. So I prefer a mailing-list/newsgroup, maybe with a web interface as well (like a Google Group), and a simple contact form for the basic user. An issue- and/or bug-tracker is good to have for more advanced users (use one hosted on Google Code Project Hosting, BitBucket, GitHub, Sourceforge, Assembla... depending on your licensing terms, of course), both to let them know about the progress of a feature request and to let them vote for the most requested features or bugfixes.

Advertise

All of the above is advertisement, really, but obviously some more professional advertising can help. And even a 75USD AdWords voucher can go a long way, if you play it right.

You can go further and contact some services that manufacture and sell promotional items for you (mugs, t-shirts, caps, ...). This seems a bit nutty, but some users are happy to have some, and this does sometimes help to reach out to new users. Just make sure to pick the right services, where you won't need to pay much, or anything (some just take a commission on sales of articles).

Stay Up to Date

Publish updates often and communicate about them. Before you know it, people will follow suit. Publish beta-testing versions of upcoming releases, for advanced users only.

Also keep up with competitors and occasionally review and compare them. DO NOT be derogatory or pejorative, be fair, do not twist numbers, and point out where you fare better. We don't expect you to point out your flaws, but to state the small ""plus"" you have over them.


Zero Budget, 30 minutes

All of this looks like a lot of time and even like it involves some money. But you can do most of it for no cost at all, or very low cost.

If you register for AdWords / AdSense / Google Webmaster Tools, you might eventually get a free voucher, or some friends might have one to spare. Technically this is money, but you didn't actually pay it, you're not down anything.

You can find free hosting services (even Blogger would do) for simple sites with (originally) low to medium traffic, and domain names can be had very cheaply per year.

And all the communication, while it can be expensive in terms of time, gets better over time:

  • Write out templates for your release and update announcements for your mailing-list, your tweets, etc..
  • Make sure to program said updates to be broadcast automatically to your different communication channels. Automate this as much as possible. It will be worth the time saved over the longer run.
  • Giving a little of your time every day or every week amounts to a lot in the end, and it's generating constant noise that matters to keep conversations going. And your friends and die-hard fans can help with this as well.

It's important to remember that every single new visitor and new recommendation counts. Whether it's someone publishing a full-page article about you, or just a friend sending a link to your app to another friend or talking about your product over a drink in a bar.

Learn

Put these 30 minutes a day to good use by learning the tools of the trade and the techniques of SEO experts, marketers and advertisers. They are, in the end, valuable skills and knowledge to have.

I remember someone saying on another StackExchange site that you should set aside 5 years of your life to learn them. Though I'd say it really doesn't take that long, there's obviously a lot to learn and various levels of expertise to obtain, but you can learn a great deal.

I'm sure as a developer you'll be happy to learn the more technical bits (like how to create pages that are SEO-friendly), relatively less happy to learn the less technical bits (how to produce user-friendly page layouts, based on actual and tested HCI concepts and marketing research, not just a programmer's instincts), and a lot less happy to learn the ""annoying"" bits that relate to marketing and advertising (picking keyword lists, writing good announcements, etc...). The motivator, for me, is to always view it as something technical: in the end, what you want is to optimize visibility, and all of this is purely a game of numbers. Learning to write and design decently is just a means to get those numbers up. Plus I find it interesting to learn UI and UX concepts, for which ""average"" users often have very different expectations than the programmers of an application (hence the need to request a lot of user feedback, and to listen to it).

Stand on the Shoulders of Giants... Be a Copy-Cat

You're not the first person to try to promote a product. Pick a famous product, and look at how they did it. How do you get access to this product when you start from 0? Ideally, you want to be able to allow users to do the same with yours. That's what you aim for. Maybe look at some influential commercial or free software project, and see how they created a community, how they communicate around their product. You can try to find innovative ways of promoting yourself (and it's usually good to innovate, to stand out from the crowd), but the good old and tested ways obviously work well.

Measure, Measure, Measure

I said two things I need to repeat here:

  • Listen to your users;
  • It's all about data, not about what you think you know as a programmer.

You can't improve things if you don't know what doesn't work or what is a better alternative. Learn (see above ;)) to use analytics systems (like Google Analytics) to track basic stats about your visitors (population demographics, origins, platforms ...) and more advanced reports (conversion rates, funnels...). Use such tools to measure the impact of changes you make to your site, and get real hard data to be able to know whether a change is beneficial or not.

I made mistakes like this myself at first, believing my vision was better, and I've had (and still have...) to deal with startup founders who start 83% of their sentences with ""I think that..."". No you don't. If you really ""thought"", you wouldn't say that. You assumed, and that's a bad habit. Usually, when someone says ""I think"", I now follow up with ""prove it"", or if I can't and don't believe their claim, I will go do my own hallway testing to prove or disprove their assumption.

A/B testing just works.
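
If you want a feel for the numbers behind it, here is a minimal sketch of the usual two-proportion z-test for comparing conversion rates (the figures are made up):

import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    '''Two-proportion z-test: conv_* are conversions, n_* are visitors.'''
    p_a = conv_a / float(n_a)
    p_b = conv_b / float(n_b)
    # Pooled conversion rate under the null hypothesis (no real difference).
    p_pool = (conv_a + conv_b) / float(n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1.0 / n_a + 1.0 / n_b))
    return (p_b - p_a) / se

# Variant B converts 120/2000 visitors vs. A's 100/2000.
# Roughly, |z| > 1.96 corresponds to the usual 95% significance threshold.
print(ab_z_score(100, 2000, 120, 2000))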

Of course, all this also takes time. I'm giving you the tools here, but just do what you can within your own constraints. You don't need to A/B test every single scenario, and you don't need to re-evaluate every single little thing you do every week. But the more you do it, the better.


All of this is meant to consolidate the prevalence of your software's own distribution site.

Your goal is to promote it, and to then allow users to find all the necessary and relevant information on your site, and to minimize the path to a download.

","3631","","3631","","2012-06-11 16:48:23","2012-06-11 16:48:23","","","","8","","","2012-06-11 16:48:23","CC BY-SA 3.0" "152098","2","","152094","2012-06-08 18:52:16","","28","","

You wouldn't use a Null Object Pattern in places where null (or Null Object) is returned because there was a catastrophic failure. In those places I would continue to return null. In some cases, if there's no recovery, you might as well crash because at least the crash dump will indicate exactly where the problem occurred. In such cases when you add your own error handling, you are still going to kill the process (again, I said for cases where there's no recovery) but your error handling will mask very important information that a crash dump would've provided.

Null Object Pattern is more for places where there's a default behavior that could be taken in cases where an object isn't found. For example, consider the following:

User* pUser = GetUser( ""Bob"" );

if( pUser )
{
    pUser->SetAddress( ""123 Fake St."" );
}

If you use NOP, you would write:

GetUser( ""Bob"" )->SetAddress( ""123 Fake St."" );

Note that this code's behavior is ""if Bob exists, I want to update his address"". Obviously if your application requires Bob to be present, you don't want to silently succeed. But there are cases where this type of behavior would be appropriate. And in those cases, doesn't NOP produce a much cleaner and concise code?
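
For completeness, here is a minimal sketch of what the null object itself might look like (Python purely for brevity; the names are illustrative, not from any real codebase):

class User:
    def __init__(self, name):
        self.name = name
        self.address = None

    def set_address(self, address):
        self.address = address

class NullUser(User):
    '''Null Object: safe to call, silently does nothing.'''
    def __init__(self):
        super().__init__(name=None)

    def set_address(self, address):
        pass  # the 'if Bob exists, update his address' semantics

_USERS = {'Bob': User('Bob')}

def get_user(name):
    # Never returns None; absent users come back as a do-nothing NullUser.
    return _USERS.get(name, NullUser())

get_user('Bob').set_address('123 Fake St.')    # updates Bob
get_user('Alice').set_address('123 Fake St.')  # harmless no-op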

In places where you really can't live without Bob, I would have GetUser() throw an application exception (i.e. not an access violation or anything like that) that would be handled at a higher level and would report general operation failure. In this case, there's no need for NOP, but there's also no need to explicitly check for NULL. IMO, those checks for NULL only make the code bigger and take away from readability. Checking for NULL is still the right design choice for some interfaces, but not nearly as many as some people tend to think.

","20673","","","","","2012-06-08 18:52:16","","","","13","","","","CC BY-SA 3.0" "152533","1","152536","","2012-06-12 13:20:24","","27","2700","

I was under the impression that by now everyone agrees this maxim was a mistake. But I recently saw this answer which has a ""be lenient"" comment upvoted 137 times (as of today).

In my opinion, the leniency in what browsers accept was the direct cause of the utter mess that HTML and some other web standards were a few years ago, and have only recently begun to properly crystallize out of that mess. The way I see it, being lenient in what you accept will lead to this.

The second part of the maxim is ""discard faulty input silently, without returning an error message unless this is required by the specification"", and this feels borderline offensive. Any programmer who has banged their head on the wall when something fails silently will know what I mean.

So, am I completely wrong about this? Should my program be lenient in what it accepts and swallow errors silently? Or am I mis-interpreting what this is supposed to mean?


The original question said ""program"", and I take everyone's point about that. It can make sense for programs to be lenient. What I really meant, however, is APIs: interfaces exposed to other programs, rather than people. HTTP is an example. The protocol is an interface that only other programs use. People never directly provide the dates that go into headers like ""If-Modified-Since"".

So, the question is: should the server implementing a standard be lenient and allow dates in several other formats, in addition to the one that's actually required by the standard? I believe the ""be lenient"" is supposed to apply to this situation, rather than human interfaces.

If the server is lenient, it might seem like an overall improvement, but I think in practice it only leads to client implementations that end up relying on the leniency and thus failing to work with another server that's lenient in slightly different ways.

So, should a server exposing some API be lenient or is that a very bad idea?


Now onto lenient handling of user input. Consider YouTrack (bug tracking software). It uses a language for text entry that is reminiscent of Markdown. Except that it's ""lenient"". For example, writing

- foo
- bar
- baz

is not a documented way of creating a bulleted list, and yet it worked. Consequently, it ended up being used a lot throughout our internal bug tracker. Then the next version comes out, and this lenient feature starts working slightly differently, breaking a bunch of lists that (mis)used this (non)feature. The documented way to create bulleted lists still works, of course.

So, should my software be lenient in what user inputs it accepts?

","3278","romkyns","-1","","2017-05-23 11:33:35","2012-07-11 00:05:23","Should a server ""be lenient"" in what it accepts and ""discard faulty input silently""?","","11","9","","","","CC BY-SA 3.0" "152554","2","","152533","2012-06-12 15:07:03","","3","","

I think this is well covered in chapter 1, section 6 of TAOUP. Specifically, the rule of repair, which states that a program should do what it can with an input, pass correct data forward, and if the correct response is failure then do so ASAP.

A similar concept is defensive programming. You don't know what kind of input you will receive, but your program should be robust enough to cover all cases. This means there should be programmed-in recovery cases for known problems like mangled input, and a catch-all case to handle unknowns.

So discarding faulty input silently is fine, so long as you are handling that input. You should never just drop it on the floor, as it were.


For an API, I think being lenient is the same as for a program. The input is still wrong, but you are attempting to repair as much as possible. The difference is what is considered valid repair. As you point out, a lenient API can cause problems as people use ""features"" that don't exist.

Of course, an API is just a lower level version of the rule of composition. As such, it is really covered under the rule of least surprise, since it is an interface.

As the quote from Spencer notes, avoid superficial similarity, which can be argued about ""fuzzy"" inputs. Under these conditions, I'd normally argue that everything points to the program being unable to repair, because it won't know what is desired, and it is least surprising for the userbase.

However, you are dealing with dates, which have many ""standards"". Sometimes these even get mixed within a single program (chain). Since you know that a date is expected, attempting to recognize the date is just good design. Especially if the date comes from some external program and gets passed unmodified through a second one on its way to you.
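
As a hedged sketch of what ""attempting to recognize the date"" might look like in practice (Python; the list of formats is illustrative, not exhaustive):

from datetime import datetime

# Formats seen in the wild; keep the list explicit so the repair stays
# predictable rather than fuzzy.
KNOWN_FORMATS = [
    '%a, %d %b %Y %H:%M:%S GMT',  # RFC 1123 style, e.g. If-Modified-Since
    '%Y-%m-%dT%H:%M:%S',          # ISO 8601 without timezone
    '%Y-%m-%d',
    '%d/%m/%Y',
]

def parse_date(text):
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    # Fail fast and loudly when repair is impossible (rule of repair).
    raise ValueError('unrecognized date format: %r' % text)

print(parse_date('Tue, 12 Jun 2012 15:07:03 GMT'))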

","27114","","27114","","2012-06-13 16:03:09","2012-06-13 16:03:09","","","","0","","","","CC BY-SA 3.0" "367382","2","","367253","2018-03-09 13:52:36","","4","","

You should have a versioning strategy because that is key to independent evolvability, but it should be tied to Content-Type, not URLs or anything else.

Even in a closed, in-house setting, you should still strive to make all components of a system independently evolvable (isolated, modularised) — especially a distributed system like a client/server-based one. This both allows different teams to work on each at their own pace, and allows for different release cadences.

Why not in URLs?

URIs identify abstract things which cannot be versioned, like a user ""Andy"", or an invoice. A representation of that thing will have a particular serialisation, which can be versioned, application/andys-api-v1+json.

Your API (as with any website) is defined by three things. These are the only things that you need to document if your API is RESTful:

  • The root URL
  • The content type(s) of representations
  • The link relations between URIs

If a v1 client obtains a link to /users/andy from a previous request, it can forward that to a v2 client, which can then make a request to the same URL to get data about the same Thing, but in a language (content-type) it can speak, application/andys-api-v2+json.

The v1 and v2 clients might be different parts of the same program, in the midst of a development cycle. The key is that the clients both continue working throughout.
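
As a hedged sketch of what that looks like on the wire (Python with the requests library; the host and media type names are the hypothetical ones used above):

import requests

BASE = 'https://api.example.com'  # hypothetical root URL

# A v1 client and a v2 client fetch the *same* resource; only the
# representation they negotiate differs.
v1 = requests.get(BASE + '/users/andy',
                  headers={'Accept': 'application/andys-api-v1+json'})
v2 = requests.get(BASE + '/users/andy',
                  headers={'Accept': 'application/andys-api-v2+json'})

# Same Thing, two serialisations:
print(v1.headers.get('Content-Type'))  # application/andys-api-v1+json
print(v2.headers.get('Content-Type'))  # application/andys-api-v2+json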

","28660","","","","","2018-03-09 13:52:36","","","","3","","","","CC BY-SA 3.0" "258465","2","","258453","2014-10-08 15:55:22","","4","","

In the world of commercial software, release schedules are tied to:

  1. Development timeline (How long does it take to create each release or patch?)
  2. Testing and QA timeline (How long does it take to test, qualify, and certify the app as running properly on all the platforms and in all the modes in which it will typically be used; this often includes ""integration,"" ""stress,"" and ""acceptance"" testing, not just unit tests)
  3. Sales and marketing cycles (How long does it take to create demand for, or acceptance of, the new release?)
  4. Customer update cycles (How often do customers want new releases? And at what point in their business cycles can they accept new releases/features? Retailers, e.g., lock down all non-urgent upgrades throughout the entire multi-month ""Christmas season"")
  5. Training and support integration (How long does it take to document new features/fixes, train both internal and end-users on them, and get your support team up to speed on the changes?)

Traditionally releases are a Big Deal, for both the software developer and the customers. So release cycles of 1-3 years between ""major"" releases have been common, with ""minor"" or ""dot"" releases every 3-6 months, and emergency patches on an as-needed basis.

Cloud and SaaS (software as a service) shops are the opposite extreme: they often ""slipstream"" updates without ever telling anyone (maybe their support staff, but not always even then). I know shops that do updates once a week, on a fixed day. Others update as often as several times daily.

You're a small shop with an internal app, so there's no real sales/marketing cycle. There doesn't seem to be any official support or testing/QA function between users and development. Your development cycles seem short, and your deployments easy. So you could iterate as fast as your user community will let you.

Having been in this situation, some suggestions:

  1. Just because you don't have an official testing/QA organization doesn't mean you should avoid testing. Please, please, please have an automated testing suite that you run before every release. This can save you a world of hurt later.
  2. Just because you can silently ""slipstream"" new functions or patches doesn't mean you should. In fact, you should not. Have a real version or release number on EVERY release. This will make tracking down bugs easier. (See e.g. semantic versioning for insight on a structured way to assign release numbers; a small sketch of that numbering scheme follows this list.)
  3. Have an app web page or ""about this app"" screen that shows recent version updates. Showing users what is ""new and notable"" is one of the things fast-updated open source projects have learned helps increase user trust and comfort with the update process.
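
As promised in point 2, here is a small, hedged sketch of the semantic versioning scheme (MAJOR.MINOR.PATCH); the function name is illustrative:

def bump(version, change):
    '''Semantic-versioning bump: version is a MAJOR.MINOR.PATCH string.'''
    major, minor, patch = (int(part) for part in version.split('.'))
    if change == 'breaking':   # incompatible API change
        return '%d.0.0' % (major + 1)
    if change == 'feature':    # backwards-compatible new functionality
        return '%d.%d.0' % (major, minor + 1)
    return '%d.%d.%d' % (major, minor, patch + 1)  # bug fix

print(bump('1.4.2', 'feature'))  # 1.5.0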
","55314","","","","","2014-10-08 15:55:22","","","","0","","","","CC BY-SA 3.0" "367762","2","","367744","2018-03-16 13:33:31","","2","","

My impression from people within the industry I have spoken with is that there is a fairly simple process, similar to business cases.

  1. Start with a format choice: 2D platformer, first-person shooter, card game, point-and-click adventure, etc.

  2. Add your twist or twists. You can rewind time, perma-death, crafting, multiplayer, etc

These two give you the basis of your engine choice. Unity, Unreal etc

  1. Characters, Locations and Roughed out plot: You are a rich adventuress trying to find a mysterious artefact in a series of 'tombs'

This gives you a list of assets to make,

  • get 10 concept sketches of 'lara'
  • make best one in 3d
  • add animations 1 through 10
  • record voice acting for scene 12b

and functionality to program

  • add inventory screen
  • add weapons that shoot
  • implement damage from monsters

Because your engine is virtually chosen for you by the type of game you want to make and the skills of your dev team, implementing each task is a fairly well-known process within that engine, and can be defined simply, just like most business specs, e.g.:

  • When I click on 'Buy' I should go to the 'payment' screen

  • When I click 'fire' the gun I'm holding should fire

The tricky bit is when you go off piste with some new game mechanic which hasn't been done before.

  • there will be 100 people all on the same server
  • when I look through the portal I will see through the exit portal
  • when I shoot the wall holes will appear
","177980","","","","","2018-03-16 13:33:31","","","","0","","","","CC BY-SA 3.0" "153844","2","","153816","2012-06-22 08:50:30","","2","","

I can't really speak for everyone, but here is what I can say.

I haven't worked 30+ years in the domain, but I've seen enough to say a few things. A project has a lifetime pretty much like a human's. The initial design may not fit the current needs of, let's say, a project after 20 years of development. And in that amount of time, a lot of people have changed the code, messed with it, and added things that weren't supposed to be possible at first.

It's not really difficult to imagine ugly code on legacy projects or fairly old projects. We can't expect everyone to fully understand the initial designs. It's sad but that's the way it is.

That said, you have to keep in mind that refactoring a legacy project is not always possible and sometimes not even desired. I worked at a company where they were developing the replacement for the project I was working on. I wasn't allowed to refactor my project too much, for fear that it would work better than the new project. I'm pretty sure there is no way that old project could ever work better than a fresh new one. The phrase was a bit like ""Don't make it better, just make it work"".

In the end, you won't run into that kind of project often, from what I read and hear. You should try to find work with startups instead of big corporations. Startups are quite interesting, and you can move on quickly if you see that things are not going the way you want them to.

Also, one thing you can do (I can't promise anything): if you feel the code is really that bad and needs refactoring, share that with the team. Keep in mind that the people who wrote that ugly code might be working with you. It's not about hurting people's feelings, but if you see that the project you're working on will collapse after some time, and that people will spend more time understanding what it does than improving it, it's better to speak up and communicate the problem than to keep it to yourself. If you're lucky, you might end up refactoring the project.

If you end up refactoring the project, you might end up being the person pointed at for bad design choices! And then you might understand why refactoring doesn't happen that often. Hopefully, if the whole team has to refactor, then nobody gets pointed at. They'll just fire everyone =)

","12039","","","","","2012-06-22 08:50:30","","","","0","","","2012-06-22 19:47:04","CC BY-SA 3.0" "260752","1","","","2014-10-23 16:01:08","","4","13206","

There are times when an enum's values are important: it is not enough for them to be unique; they also need to have specific values. In such cases, should the values be explicitly defined, even if they coincide with the defaults? An example:

enum Car {
    DeLorean = 0,
    Lada = 1
};

Imagine that for whatever reason your application assumes that DeLorean and Lada have those specific values. Incidentally, they are the same as the default values but does that mean it is no longer necessary to use explicit definitions?

Leaving them implicit makes me uneasy. It seems to me that having them explicitly defined communicates to future programmers that the specific values are important, and it helps prevent mistakes like this:

enum Car {
    Dacia,
    DeLorean,
    Lada
};

In the example above, another programmer who is not aware of the restriction I mentioned introduces a new enum value and, wanting to keep the code tidy, puts it in alphabetical order. However, DeLorean and Lada now have different numerical values and a potentially silent bug has been introduced.

My reasoning seems correct to me (obviously), but the code review team for a company I used to work with didn't agree. Am I wrong?

","27083","","27083","","2014-10-23 17:13:04","2014-10-26 00:22:51","Explicitly define enum values, even if the default value is the same?","","6","8","","","","CC BY-SA 3.0" "368658","1","","","2018-04-01 06:22:53","","4","1769","

How do you cope with a team who tends to underestimate time needed to complete tasks and haven't been improving the accuracy of their estimates?

Details: I work in a scrum team (7 engineers) in a FANG company. At the end of every sprint, we vote to estimate how many hours we need to spend on each user story for the next sprint. Then we assign these stories to each one of us according to our available capacity.

I've been here for a year, and we have a very persistent problem: almost nobody can finish their planned work in almost any sprint. We have huge carryovers every sprint.

I tend to vote for larger estimates, but my teammates almost never learn from their past mistakes and persistently vote low in these estimations.

I'm the kind of person who just wants to work 40 hours/week, chill, and avoid burnout. I believe in 'under-promise and over-deliver'. I know some of my teammates work long hours all the time. Our scrum master works extra hours almost every day, yet she still votes very low all the time. She's been around for quite a while, so we respect her opinions.

They might each have different incentives, like wanting to impress management or to conform with the others. Maybe they want fast promotions? I don't know and I don't care. I try to cope with it by taking a strong lead in my own projects and voicing my concerns in planning meetings. But sometimes I get assigned user stories that were estimated by the team, and they usually come with ridiculous expectations, like launching a small new production service from scratch in a week. Remember, it's a big company with a lot of internal processes, and a teammate has told me it takes at least 3 weeks to launch a bare-bones service. I was on vacation when this estimation happened.

Also I would look bad if I have big carryover points too often.

My manager is kind of a people pleaser and tends to accept ridiculous deadlines from other teams or upper management. Thankfully, my manager listens to me.

Sorry for the long rant. I actually like my team and manager, so I don't want to leave. I know we are doing agile all wrong, but they won't change, and they don't seem to care about working long hours.

","301665","","301665","","2018-04-01 06:28:55","2018-04-02 08:32:13","How do you cope with a team who tends to underestimate time needed to complete tasks?","","6","14","4","","","CC BY-SA 3.0" "67937","2","","67923","2011-04-13 19:56:39","","5","","

Your clients, the business people, may have some sort of problem and desire some sort of technical solution, but have little idea of how the solution might work, and thus little idea of how to spec any potential solution. If so, the missing role is that of a business solutions analyst, who can study the customer, their problems, their workflows, etc., and how any possible solutions might fit their corporate procedures, culture, etc., as well as whether any particular solution might be feasible to implement in time, under budget, etc. This may be a highly interdisciplinary role, requiring some knowledge of business practices (law, accounting, logistics, etc.), user psychology, and software technology.

It sounds like you want to force the customer to be their own business solutions analyst. This may not be a role they have enough expertise in to ensure a reasonable spec. And it sounds like you don't want to take this role either. If neither you, nor your customer has the expertise to fill this role, you may not have all the people needed for a successful project.

Sometimes a bunch of rapid prototypes that the customer can play with might be the only way to experimentally discover and converge on some sort of usable solution for the customer's (voiced and unvoiced) needs. This may or may not be suitable for any kind of non-open-ended contract.

ADDED: If you try to force a requirements document out of customers who don't have the requisite expertise, this could potentially be a huge red flag indicating an oncoming disaster.

","9742","","9742","","2011-04-13 20:09:03","2011-04-13 20:09:03","","","","0","","","","CC BY-SA 3.0" "68007","2","","67923","2011-04-14 04:03:19","","26","","

I have spent the last 3 months in an exhaustive - and exhausting - requirements-gathering phase of a major project and have learned, above all else, that there is no one-size-fits-all solution. There is no process, no secret, that will work in every case. Requirements analysis is a genuine skill, and just when you think you've finally figured it all out, you get exposed to a totally different group of people and have to throw everything you know out the window.

Several lessons that I've learned:

  • Different stakeholders think at different levels of abstraction.

    It is easy to say ""talk at a business level, not technical"", but it's not necessarily that easy to do. The system you're designing is an elephant and your stakeholders are the blind men examining it. Some people are so deeply immersed in process and routine that they don't even realize that there is a business. Others may work at the level of abstraction you want but be prone to making exaggerated or even false claims, or engage in wishful thinking.

    Unfortunately, you simply have to get to know all of the individuals as individuals and understand how they think, learn how to interpret the things they say, and even decide what to ignore.

  • Divide and Conquer

    If you don't want something done, send it to a committee.

    Don't meet with committees. Keep those meetings as small as possible. YMMV, but in my experience, the ideal size is 3-4 people (including yourself) for open sessions and 2-3 people for closed sessions (i.e. when you need a specific question answered).

    I try to meet with people who have similar functions in the business. There's really very little to gain and very much to lose from tossing the marketing folks in the room with the bean counters. Seek out the people who are experts on one subject and get them to talk about that subject.

  • A meeting without preparation is a meeting without purpose.

    A couple of other answers/comments have made reference to the straw-man technique, which is an excellent one for those troublesome folks that you just can't seem to get any answers out of. But don't rely on straw-men too much, or else people will start to feel like you're railroading them. You have to gently nudge people in the right direction and let them come up with the specifics themselves, so that they feel like they own them (and in a sense, they do own them).

    What you do need to have is some kind of mental model of how you think the business works, and how the system should work. You need to become a domain expert, even if you aren't an expert on the specific company in question. Do as much research as you can on your business, their competitors, existing systems on the market, and anything else that might even be remotely related.

    Once at that point, I've found it most effective to work with high-level constructs, such as Use Cases, which tend to be agreeable to everybody, but it's still critical to ask specific questions. If you start off with ""How do you bill your customers?"", you're in for a very long meeting. Ask questions that imply a process instead of belting out the process at the get-go: What are the line items? How are they calculated? How often do they change? How many different kinds of sales or contracts are there? Where do they get printed? You get the idea.

    If you miss a step, somebody will usually tell you. If nobody complains, then give yourself a pat on the back, because you've just implicitly confirmed the process.

  • Defer off-topic discussions.

    As a requirements analyst you're also playing the role of facilitator, and unless you really enjoy spending all your time in meetings, you need to find a way to keep things on track. Ironically, this issue becomes most pernicious when you finally do get people talking. If you're not careful, it can derail the train that you spent so much time laying the tracks for.

    However - and I learned this the hard way a long time ago - you can't just tell people that an issue is irrelevant. It's obviously relevant to them, otherwise they wouldn't be talking about it. Your job is to get people saying ""yes"" as much as possible and putting up a barrier like that just knocks you into ""no"" territory.

    This is a delicate balance that many people are able to maintain with ""action items"" - basically a generic queue of discussions that you've promised to come back to sometime, normally tagged with the names of those stakeholders who thought it was really important. This isn't just for diplomacy's sake - it's also a valuable tool for helping you remember what went on during the meetings, and who to talk to if you need clarification later on.

    Different analysts handle this in different ways; some like the very public whiteboard or flip-chart log, others silently tap it into their laptops and gently segue into other topics. Whatever you feel comfortable with.

  • You need an agenda

    This is probably true for almost any kind of meeting but it's doubly true for requirements meetings. As the discussions drag on, people's minds start to wander off and they start wondering when you're going to get to the things they really care about. Having an agenda provides some structure and also helps you to determine, as mentioned above, when you need to defer a discussion that's getting off-topic.

    Don't walk in there without a clear idea of exactly what it is that you want to cover and when. Without that, you have no way to evaluate your own progress, and the users will hate you for always running long (assuming they don't already hate you for other reasons).

  • Mock It

    If you use PowerPoint or Visio as a mock-up tool, you're going to suffer from the issue of it looking too polished. It's almost an uncanny valley of user interfaces; people will feel comfortable with napkin drawings (or computer-generated drawings that look like napkin drawings, using a tool like Balsamiq or Sketchflow), because they know it's not the real thing - same reason people are able to watch cartoon characters. But the more it starts to look like a real UI, the more people will want to pick and paw at it, and the more time they'll spend arguing about details that are ultimately insignificant.

    So definitely do mock ups to test your understanding of the requirements (after the initial analysis stages) - they're a great way to get very quick and detailed feedback - but keep them lo-fi and don't rush into mocking until you're pretty sure that you're seeing eye-to-eye with your users.

    Keep in mind that a mock up is not a deliverable, it is a tool to aid in understanding. Just as you would not expect to be held captive to your mock when doing the UI design, you can't assume that the design is OK simply because they gave your mock-up the thumbs-up. I've seen mocks used as a crutch, or worse, an excuse to bypass the requirements entirely; make sure you're not doing that. Go back and turn that mock into a real set of requirements.

  • Be patient.

    This is hard for a lot of programmers to believe, but for most non-trivial projects, you can't just sit down one time and hammer out a complete functional spec. I'm not just talking about patience during a single meeting; requirements analysis is iterative in the same way that code is. Group A says something and then Group B says something that totally contradicts what you heard from Group A. Then Group A explains the inconsistency and it turns out to be something that Group C forgot to mention. Repeat 500 times and you have something roughly resembling truth.

    Unless you're developing some tiny CRUD app (in which case why bother with requirements at all?) then don't expect to get everything you need in one meeting, or two, or five. You're going to be listening a lot, and talking a lot, and repeating yourself a lot. Which isn't a terrible thing, mind you; it's a chance to build some rapport with the people who are inevitably going to be signing off on your deliverable.

  • Don't be afraid to change your technique or improvise.

    Different aspects of a project may actually call for different analysis techniques. In some cases classical UML (Use Case / Activity diagram) works great. In other cases, you might start out with business KSIs, or brainstorm with a mind map, or dive straight into mockups despite my earlier warning.

    The bottom line is that you need to understand the domain yourself, and do your homework before you waste anyone else's time. If you know that a particular department or component only has one use case, but it's an insanely complicated one, then skip the use case analysis and start talking about workflows or data flows. If you wouldn't use the same tool for every part of an app implementation, then why would you use the same tool for every part of the requirements?

  • Keep your ear to the ground.

    Of all the hints and tips I've read for requirements analysis, this is probably the one that's most frequently overlooked. I honestly think I've learned more eavesdropping on and occasionally crashing water-cooler conversations than I have from scheduled meetings.

    If you're accustomed to working in isolation, try to get a spot around where the action is so that you can hear the chatter. If you can't, then just make frequent rounds, to the kitchen or the bathroom or wherever. You'll find out all kinds of interesting things about how the business really operates from listening to what people brag or complain about during their coffee and smoke breaks.

  • Finally, read between the lines.

    One of my biggest mistakes in the past was being so focused on the end result that I didn't take the time to actually hear what people were saying. Sometimes - a lot of the time - it might sound like people are blathering on about nothing or harping about some procedure that sounds utterly pointless to you, but if you really concentrate on what they're saying, you'll realize that there really is a requirement buried in there - or several.

    As corny and insipid as it sounds, the Five Whys is a really useful technique here. Whenever you have that knee-jerk ""that's stupid"" reaction (not that you would ever say it out loud), stop yourself, and turn it into a question: Why? Why does this information get retyped four times, then printed, photocopied, scanned, printed again, pinned to a particle board, shot with a digital camera and finally e-mailed to the sales manager? There is a reason, and they may not know what it is, but it's your job to find out. Good luck with that. ;)

","3249","","3249","","2011-04-14 04:11:02","2011-04-14 04:11:02","","","","1","","","","CC BY-SA 3.0" "68208","1","","","2011-04-14 17:52:53","","1","740","

Mostly fact and maybe a little bit of opinion:
One of my pet peeves in programming is data interchange. I work exclusively with small business software (as opposed to working with corporate ERP systems) and I see that many small businesses store contacts two or three times in different types of software where there is little or no interchange. Time clock systems usually don't integrate with calendaring and project management systems. Task management may be in a system separate from project management, or more commonly, tasks are managed on paper or mentally. These are just a few examples of the data interchange problems I see every day in almost every small business.

It seems every software company has a different idea about how to store its data and whether or not to expose that data for interchange and integration. If they do expose some of their data, there is really no common standard to facilitate interchange or synchronization. You're always forced to read through the SDK/API documentation if you do want to program some type of integration, and then it's almost never a painless experience to get it working and keep it working.

It would be really nice to receive an ""electronic document"" from our vendors that would allow us to quickly enter bills into our accounting system. It would be really nice if, when we hire a new employee, we could make one single entry somewhere, enter his roles, and have him entered into all the correct systems, even if his information would need to be flagged for review in systems such as accounting. It would be so nice if all our contacts were seamlessly shared between systems so that updating an address or phone number in one of them would make the changes in all of them.

Questions:

  1. Do data interchange standards for common entities such as people, purchase orders, sales orders/invoices, shipping documents, projects, tasks, appointments, etc. already exist?

  2. Assuming some do exist, are they commonly accepted, ratified standards?

  3. Would you program a business system to use or follow these standards if the initial requirements of the project did not call for it? In other words, how common or accepted are these standards, enough to be followed every time?

","17302","","13156","","2011-04-14 18:05:35","2015-09-15 10:48:36","Universal Standards for Data Interchange - Do they exist and do you follow them?","","9","0","","2015-09-15 22:57:30","","CC BY-SA 3.0" "68211","2","","68208","2011-04-14 18:12:35","","1","","

1) Do data interchange standards for common entities such as people, purchase orders, sales orders/invoices, shipping documents, projects, tasks, appointments, etc. already exist?

Yes and no. The glib response is ""the nice thing about standards is that there are so many to choose from"".

There are some standards, like vCard, which is meant for contact management and will work with email applications, but typically not much more than that. Purchase orders and sales orders/invoices are going to be proprietary to the system--a different format for QuickBooks Pro vs. what you would use for SAP. You can use iCal for appointments, which seems to work for calendar tools but is a pain to consume in your own tool. You might be able to extend iCal to handle tasks (it has a data type for to-do items). As to projects, again, those are proprietary.
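
For a feel of what that looks like on the wire, here is a small illustrative sketch (hand-rolling the format like this is just for demonstration; a real integration would normally use a vCard library):

public class VCardSketch {

    // Builds a minimal vCard 3.0 record; real cards usually carry more
    // properties (N, ADR, ORG, ...) but this is the basic shape.
    static String toVCard(String fullName, String email, String workPhone) {
        return String.join("\r\n",
                "BEGIN:VCARD",
                "VERSION:3.0",
                "FN:" + fullName,
                "EMAIL:" + email,
                "TEL;TYPE=WORK:" + workPhone,
                "END:VCARD");
    }

    public static void main(String[] args) {
        System.out.println(toVCard("Jane Doe", "jane@example.com", "+1-555-0100"));
    }
}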

Part of the problem is that there is no compelling reason to standardize for the people who make these products. When the data interchange is standardized, then that means it is easier to swap out any part of the system. That's a scary proposition for large cumbersome products that want to get you hooked because it would be too expensive and time consuming to change.

2) Assuming some do exist, are they commonly accepted, ratified standards?

vCard and iCal standards are commonly accepted in some settings. Some project management tools will export data using these standards so that a user can have the information on their calendar. However they are rarely used to exchange appointments and contacts system to system. Not sure if there are technical reasons for that, or if it is just something not enough people thought about.

3) Would you program a business system to use or follow these standards if the initial requirements of the project did not call for it? In other words, how common or accepted are these standards, enough to be followed every time?

It really depends on the audience. Every project needs to evaluate whether using the standards is appropriate, makes good business sense, and satisfies a real need. In some cases it would be a slam dunk to say ""thou shalt use vCard and iCal for exchanging information, thus sayeth the almighty contractor!"" In other cases the tool is just too small to warrant the overhead--or the client will never use the feature anyway.

Other standards like RSS can be consumed in creative ways, so they also make a lot of sense provided the data supports it. It depends on the data and the need.

","6509","","","","","2011-04-14 18:12:35","","","","3","","","","CC BY-SA 3.0" "68238","2","","68208","2011-04-14 19:23:11","","8","","

Do data interchange standards for common entities such as people, purchase orders, sales orders/invoices, shipping documents, projects, tasks, appointments, etc. already exist?

Yes. That's what EDI is all about.

Start here: http://en.wikipedia.org/wiki/Electronic_Data_Interchange

Assuming some do exist, are they commonly accepted, ratified standards?

Absolutely.

Would you program a business system to use or follow these standards if the initial requirements of the project did not call for it?

No.

In other words, how common or accepted are these standards, enough to be followed every time?

The issue isn't ""common"" or ""accepted"" it's ""cost"" and ""value"". Sometimes EDI doesn't create enough value. Sometimes it's absolutely essential for working in the given industry.

","5834","","","","","2011-04-14 19:23:11","","","","1","","","","CC BY-SA 3.0" "68374","2","","68208","2011-04-15 01:29:25","","1","","
1 Do data interchange standards for common entities such as people,
  purchase orders, sales orders/invoices, shipping documents, projects, 
  tasks, appointments, etc. already exist?

Yes there are for some of these. Sometimes there are competing standards. Usually there are good reasons for these different standards. If you are designing a system, the existing standards are a good place to look for information on attributes you might want to capture and store.

  • For people, standards include LDAP schemas, X.500, and vCard. There is an international postal standard for the layout of addresses.
  • For business transactions there are EDI standards. Unfortunately, industries have specific and differing requirements.
  • For appointments, the only standard I have used is iCal.

There are other data standards that I am not aware of or have not listed. I have dealt with the above standards to some degree.

Beyond these, there are a number of underlying standards we use such as ASCII, UTF-8, FTP, SCP, SMTP and others. These are building blocks that make the interchange of data possible.

2 Assuming some do exist, are they commonly accepted, ratified standards?

Lower level standards are all ratified and commonly accepted. When you get into actual data formats, many are ratified or commonly accepted.

3 Would you program a business system to use or follow these standards
 if the initial requirements of the project did not call for it? 
 In other words, how common or accepted are these standards, enough 
 to be followed every time?

The simple answer is they apply to data interchange, and not to business requirements for systems. As such they really aren't relevant to most systems. The standards are most useful when a system needs to exchange data with a large number of systems in other organizations. In that case I would follow the appropriate standards.

When programming a business system, I would refer to the appropriate standards to validate the data model. I would not add any attributes because they were required by an interchange standard. Most systems have little need to interchange data with external systems, and the interchange standards are not the best method to do internal data transfer.

If and when a system was required to communicate with another system, then I would consider how best to build the interface module. An important consideration at that time is how to select and secure the data being transferred. It would be critical to ensure that only the correct subset of data is transferred.

My primary concern in building a business system is what data is required by the business. Data recorded in the system needs to reflect the needs of the business. This may include recording data not reflected in any standards, or omitting data that is required by a standard.

Most of the data interchange systems I have worked with have involved transfer of data from one system to another. While standards are useful in such cases, they may require far more effort than is necessary.

It would be wonderful if I could update my contact data in the one true place and have everyone who needed it get the appropriate parts. However, the one true place would also need to ensure my privacy, and prevent unauthorized access to data. Instead we have a piecemeal process where different systems have different data. Many stores have obsolete data about me, but I am fine with that. I will update the data when I consider it important. However, I may not give all the data they might like.

","17320","","","","","2011-04-15 01:29:25","","","","0","","","","CC BY-SA 3.0" "154676","1","154678","","2012-06-28 14:49:55","","37","7026","

Many larger OSS projects maintain IRC channels to discuss their usage or development. When I get stuck on using a project, having tried and failed to find information on the web, one of the ways I try to figure out what to do is to go into the IRC channel and ask.

But my questions are invariably completely ignored by the people in the channel. If there was silence when I entered, there will still be silence. If there is an ongoing conversation, it carries on unperturbed. I leave the channel open for a few hours, hoping that maybe someone will eventually engage me, but nothing happens.

So I worry that I'm being rude in some way I don't understand, or breaking some unspoken rule and being ignored for it. I try to make my questions polite, to the point, and grammatical, and try to indicate that I've tried the obvious solutions and why they didn't work. I understand that I'm obviously a complete stranger to the people on the channel, but I'm not sure how to fix this. Should I just lurk in the channel, saying nothing, for a week? That seems absurd too.

A typical message I send might be ""Hello all - I've been trying to get Foo to work, but I keep on getting a BarException. I tried resetting the Quux, but this doesn't seem to do anything. Does anyone have a suggestion on what I could try?""

","57899","","31260","","2015-06-24 18:57:08","2015-06-24 18:57:08","Etiquette when asking questions in an IRC channel","","4","8","3","2014-05-05 17:12:15","","CC BY-SA 3.0" "68779","2","","68740","2011-04-16 02:53:43","","68","","

This is the standard answer when developers don't think they will get around to doing something in any reasonable timeframe, but it's been repeatedly brought up.

It's most unfair when it's been repeatedly brought up, but the person who's most recently mentioned it doesn't know that, and just gets ""we are taking patches for that"" right away. In this case the maintainer is fed up with the discussion but the user thinks it's a new topic. Anyhow, most likely if you get ""taking patches"" right away, you shouldn't take it personally but might want to read over the archives and bug tracker for more details on the issue.

If you are repeatedly bringing up a request yourself, ""taking patches"" is potentially intended to be a relatively polite brush-off, vs. some less polite alternatives...

And then of course there are rude maintainers who will say ""taking patches"" with no explanation ever to anyone, but I'd say that's a minority.

If you've ever maintained an open source project with a lot of users, you'll know that there are 100x more requests than the maintainers could ever get to, and many of those requests are important to the requester but would be outrageously difficult, or would disrupt a lot of other users, or have some other flaw that's only visible with a global understanding of the project and codebase. Or sometimes there are just judgment calls, and it takes too much time to argue every one over and over.

Most non-open-source companies will not give you access to the developers at all, and you'll just get the silent treatment or a polite but bogus story from customer support. So, in open source at least you have some options (pay someone to code the feature, etc.) and while developers might be rude, at least they give straight answers. I'd rather have ""no"" than the usual ""it's on our roadmap... [2 years later] ... it's still on our roadmap"" kind of thing I've gotten from a number of vendors...

So I don't think there's a retort. Maybe the open source maintainer is just really busy, maybe they're a jerk, but either way, they likely have a tough job and getting into a who-has-the-last-word debate isn't going anywhere. The best you can do is contribute in some way and try to be constructive.

Maybe it isn't code, but possibly there's a lot of analysis and documenting user scenarios you could do. When I was maintaining the GNOME window manager, lots of times it would have been helpful for people to go analyze a problem globally considering all users, and really write down the issues and pros and cons and what should happen from a global perspective.

(Instead, the usual thing was to start flaming as if they were the only user that mattered and there were no tradeoffs. And while that's great, and was a datapoint, and often I managed to stay polite or even solve their problem eventually... flaming does not make anything happen more quickly. It just confuses emotions into the issue and wastes everyone's time.)

","6669","","6669","","2011-04-16 03:02:11","2011-04-16 03:02:11","","","","12","","","2011-04-16 05:10:34","CC BY-SA 3.0" "155710","1","","","2012-07-05 12:43:59","","2","740","

I have joined an ongoing project, where the team calls their architecture ""component-based"". The lowest level is one big database. The data access (via ORM) and business layers are combined in various components. E.g., there's a component for handling bank accounts, one for generating invoices, etc. So every component contains the data access to only a part of the schema. My issue is the coupling of data access and business logic in such a structure, because while such a partition makes sense for business logic, it complicates data access.

From my point of view the separation of the data access layer into various components seems counterproductive, because it denies us the relational mapping capabilities of the ORM. E.g., when I want to query all invoices for one customer I have to identify the customer with the ""customers"" component and then make another call to the ""invoices"" component to get the invoices for this customer. The entity Customer can't have an Orders property, because Orders are mapped in a different component.
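
To illustrate (all the names below are invented for the example), the code ends up looking roughly like this:

import java.util.List;

class Customer {                      // mapped in the "customers" component
    Long id;
    String name;
    // no List<Invoice> invoices here - Invoice belongs to another component
}

class Invoice {                       // mapped in the "invoices" component
    Long id;
    Long customerId;                  // only the raw foreign key survives
}

interface CustomersComponent { Customer findByName(String name); }
interface InvoicesComponent  { List<Invoice> findByCustomerId(Long customerId); }

class ReportingClient {
    // Two component calls instead of one ORM navigation (customer.getInvoices()):
    List<Invoice> invoicesFor(CustomersComponent customers, InvoicesComponent invoices, String name) {
        Customer customer = customers.findByName(name);
        return invoices.findByCustomerId(customer.id);
    }
}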

Does anybody have some advice? Have I overlooked something?

","58529","","58529","","2012-07-05 15:59:50","2012-07-05 15:59:50","ORM and component-based architecture","","2","0","","","","CC BY-SA 3.0" "155996","2","","155488","2012-07-08 07:46:29","","2","","

Do the analysis first.

I would do some analysis before deciding what to teach. Figure out where the biggest pain points are. Use those to prioritize what practices to go over.

Introduce only a few changes at a time (in a similar situation I did 2-3 practices every 2 weeks).

I would limit the practices to ~3, depending on the level of change to their programming style or SDLC, until they start to get comfortable with them (I would push to introduce 1 new change every ~1-2 weeks as they get more comfortable with the idea of learning new approaches). It's also a good idea to identify the criteria for success: what the practice should accomplish (even if it's a soft goal like team morale). That way you can show whether it's effective or not.

  • Why limit the number of changes?

Even if you assume these people want to be better programmers and are open to learning, there are limits to how much and how fast people can learn new concepts and apply them, especially if they don't have a CS foundation or haven't participated in a Software Development Life Cycle previously.

Add a weekly wrap-up meeting to discuss how the practices affected them.

The meeting should be used to discuss what went well and what needs work. Allow them to have a voice and be collaborative. Discuss and make plans to address problems they are having and to preview the next changes coming up. Keep the meeting focused on the practices and their application. Do a little evangelizing on the benefits they should start to see from applying the practices.

Certain practices take precedence.

Proper use of a version control system (IMO) trumps everything else. Close behind are lessons in modularization, coupling/cohesion and feature/bug ticket tracking.

Remove practices that don't work.

Don't be afraid to get rid of practices that don't work. If there is a high cost and little to no benefit, remove the practice.

Improvement is a process.

Convey that sustained, consistent improvement is a process. Identify the biggest pain points, apply a solution, wait/coach, and then repeat. It will feel agonizingly slow initially, until you build up some momentum. Keep everyone focused on the improvements that are coming and the improvements that are already successful.

","7575","","7575","","2013-06-15 04:32:23","2013-06-15 04:32:23","","","","0","","","2012-07-08 07:46:29","CC BY-SA 3.0" "70888","2","","40508","2011-04-24 15:36:42","","1","","

Solve the general issues first: we need a web server, app server, DB, etc.

For the debates about which DB or server to use, park those items for another meeting.

During the subsequent meetings, allow for discussion to ""short list"" the potential offerings, e.g. MySQL, MS SQL Server, Postgres, etc.

Allow team members to voice their opinions, but request that they back them up with facts. ""Product X sucks!"" doesn't cut it, and ""Product Y doesn't scale!"" is too vague. Etc.

Once all the details are out and on the table you need to either put it to a vote or as team lead make an executive decision.

If you need to flush out a clear winner, or confirm support (or the lack of it) for a feature/concept, feel free to take some time to do a POC (Proof Of Concept), but realize this will take time and there is a tendency for developers to want to run with whatever they have started with. Be sure to verify any roadblocks/tech concerns before going with the POC.

","3199","","","","","2011-04-24 15:36:42","","","","0","","","2011-11-04 10:44:04","CC BY-SA 3.0" "263249","1","263311","","2014-11-19 17:35:55","","1","171","

My company is working on developing a new product that is similar to (but larger in scope than) our existing, primary application. It will incorporate most of the functionality of our current application (except where that functionality isn't currently being used) and eventually supplant it.

Our initial thought was to solicit product suggestions and feedback on mock-ups from certain members of our current clientele who have made multiple feature suggestions in the past. This seemed good in theory, as we could get a larger perspective from our projected user base as well as getting buy-in from those customers who might have had feature requests they previously made indefinitely postponed.

In practice what we're getting is a lot of silence. We have meetings with this group every two weeks to get their feedback and suggestions. However, they rarely - if ever - have anything to add. We've tried emailing out to them a few weeks in advance, asking for their suggestions on a certain feature that we intend to design and mock-up for them on x date, and no suggestions come back to us. When we ask for feedback on existing mock-ups during the meetings, the response is always along the lines of ""Oh, that looks good...I'll get back to you if I think of any problems."" This leaves us extremely anxious.

TLDR: The concern is that while our design may look great to us and our business team, our customer base won't like it (there are many areas it is radically different, based on things we implemented poorly the first time around). Can anyone offer any suggestions on ways we can better solicit feedback?

Edit: Please note that I'm not asking how to proceed without client involvement. Rather, I'm looking for suggestions on how to convince people that it's in their best interest to be more involved.

","102390","","-1","","2017-04-12 07:31:29","2014-11-20 19:25:54","How to solicit new product recommendations from existing clients?","","2","9","0","","","CC BY-SA 3.0" "263825","2","","263803","2014-11-26 12:53:29","","3","","

I use two things - my voice and a whiteboard.

I mean, as much as developers hate meetings, you do need to work as a team. Standups are a great time to talk about new stuff you're doing in code (and to ask if something already exists). For big things, the sprint planning meeting is a good time for the entire team to provide input to the design, so everyone knows what is going on. And the sprint demo meeting is a good time to go into short detail about how the design has changed since the beginning of the sprint. In my experience, a whiteboard is the best tool for communicating that design amongst developers. Too few are proficient enough at reading code, and nobody reads (quickly obsolete) documentation.

And for particularly big/complex/subtle things, you'll likely need to have a dedicated meeting to disseminate that info.

Code reviews can serve as a last line of defense against duplicated effort or misuse/mis-modification of code. But they catch things after all of the work is done. By having good teamwork, you prevent the wasted work from being done altogether.

Don't use agile/scrum? First, my condolences. Second, the premise still holds. You need to talk as a team - daily for little things. And you need to coordinate the bigger things periodically so that you work as a team.

","51654","","","","","2014-11-26 12:53:29","","","","2","","","","CC BY-SA 3.0" "72969","2","","72967","2011-05-02 11:30:54","","79","","

I'll try to list a few things¹ I wish I thought about when creating my company.

The essential thing to know is that either you have to hire people (lawyers, accountants, salesmen, project managers), or you have to learn lots of stuff yourself, given that trial and error technique would often cost you a lot of money.

  • Be aware of the local laws. When you're a small company and you're sued by your customer for thousands of dollars because some mandatory sentence is missing from your invoice, it's not obvious to handle.

    In the same way, when a customer doesn't pay you for months, and you go to a lawyer and learn that the contract you signed doesn't force your customer to pay you, you wish you had consulted a lawyer before signing anything. I spent four years in law college; I'm always surprised by the poor quality of contracts written by people with no knowledge of law. Most of the contracts I've seen clearly say that the developer may never be paid, or that the customer can request any change at no cost.

    Remember, some customers will spend a huge amount of time trying not to pay, or to pay less. They will invoke the fact that your product doesn't match their expectations, or that they always thought that the changes you made at their request were for free, or that they don't need the product any longer. Make sure to see F*ck You. Pay Me. by Mike Monteiro, which discusses such situations.

    This is a job of a lawyer. Lawyers are expensive, but they save you money.

  • Be sure that the taxes will not be higher than your income. In France, for example, when you start you can easily be in the situation where multiple semi-governmental organizations (such as the mandatory insurance company) will claim thousands of dollars per year, yet your income is several hundreds of dollars per year.

    Nobody cares about such nonsense, because it's a way for those organizations to make a lot of money. Even when you don't have any income, you still have to pay. Given that some of them are managed as insurance companies and benefit from their monopoly, you find yourself in front of an entity which behaves much like the mafia (i.e. no matter what your situation is, you'll have to pay), but sometimes without the cover benefits.

    Seeing taxmen arrive at your company and ask to check the accounts, then finding a few mistakes which will cost you a few thousand dollars, is not a nice thing either.

    This is a job of an accountant: avoiding accounting errors which usually cost too much, and defending the money of your company from the intentional errors of powerful entities.

  • What makes you better than all the freelance developers? What makes you better than all the larger software development companies? How do you explain to the customers that you're better?

    I had a few discussions with my colleagues who wanted to create their own companies. ""What do you have that others don't?"", I asked every time. Either they can't answer, or they answer something like ""I'll ask for a lower price"", but they are unable to explain how they would achieve those cost savings.

    Be sure you know the aspects in which you are better than the competitors. Be sure you are able to market yourself, explaining not only what's better, but also why.

    • Example: a company A ships software at a lower cost, because they use lean management, removing the waste related to tasks which are not needed in order to deliver the product.

    • Another example: a company B ships high-quality software by using intensive formal code reviews, testing, formal proof, and other techniques used in companies writing live-critical software.

    • Last example: a company C delights its customers by using radical management and Agile.

    More importantly, how you will find your customers? Do you advertise? Where? How? How much would it cost?

    Are you ready to answer customers' questions? For example, if somebody asks for the names of companies you worked for before, in order to ask those companies for feedback, or if somebody asks you to show the software products or web apps you've done, do you have an answer?

    This is a job of a salesman: somebody who knows your business, knows your strong points, and can quickly, easily and honestly explain why your company is the best.

  • How do you avoid shipping the project late, when the customer constantly asks for changes in the features you just delivered?

    How do you calculate the price the customer has to pay? If you're paid per hour of work, how can the customer be sure that you don't ask to be paid for 213 hours when in fact you worked 186 hours?

    How do you keep track of a project? How do you know that the project is about to fail, and when you know it, how do you prevent it?

    This is a job of a project manager. Leading a project from ""I have a great idea, it's in my head now"" to the fully-featured product requires more than knowing how to write programming code.

  • Are you sure you're ready to deal with customers? What will happen when a customer is not polite? What if a customer says that your product sucks or does not conform to the requirements when in fact it follows them exactly? What if a customer, after two months of development on a three-month project, tells you that you must rewrite your ASP.NET project in PHP? What if the customer doesn't even know what her project is about?

    This, again, is a task of the project manager, the salesman or the support. Dealing with customers after you signed the contract requires a lot of tact, patience, professionalism and, often, anger-management.


¹ Note: my company is in France, so some points may not apply or be less important in other countries.

","6605","","6605","","2014-04-16 07:13:09","2014-04-16 07:13:09","","","","2","","","","CC BY-SA 3.0" "158437","2","","158435","2012-07-27 09:20:13","","3","","

Yes, you should use some kind of project management tool, even as a sole developer, but your primary goals are different. As a team, your goal is to keep everyone up-to-date; as a sole developer you are by definition always up-to-date, so your goal is to free your mind: freeing it from stuff that needs to be done, but not now.

You can reach this goal by simply writing tasks down, in Basecamp, Excel, or on a sheet of paper; it doesn't matter, just free your brain.

For my projects I am using Trello, which is a fantastic tool for my use case, primarily because it doesn't impose a specific workflow on me but gives me the power to create my very own workflow which fits my needs.

Here is a great blog post from UserVoice which shows how Trello can be used in software development.

","27400","","","","","2012-07-27 09:20:13","","","","2","","","","CC BY-SA 3.0" "158515","2","","158475","2012-07-27 19:28:06","","6","","

The last time I was a dev lead (of a team full of juniors, most of them final-year CS MSc students), I had a third approach:

  1. Give the story to the junior dev
  2. Tell him to come back with a plan for how it would be solved
  3. After reviewing the plan, but only then, let him start editing files.

I can't say it worked all that well.

For simple tasks, like adding a new column to a report table, they made a huge mess, editing all the files, and then said ""it's already done""; then we reviewed it, it turned out it wasn't the way it should have been done, and they had to revert nearly everything (1) and start again.

On big tasks (5+ classes involved), it was the other way around: they were simply silent for a day or two, then we had to go through it together, and basically I drew up a design which they had to implement.

Although I was there and explained every single decision to them fully, and they had done the required reading beforehand, and it was more like a demonstration (2), at the end of the day they still mostly had to implement my design, and they weren't as happy about it as when they were let free.

I know it's hard to differentiate planning from doing, but I guess it's called software engineering because there are still a few individuals left who know what they are doing, as opposed to craftsmen. My duty as their trainer was not to create another average coder, but someone who excels in his profession. Luckily, today, all of them are team leads (or were, but joined a different, more ""agile"" company with no leadership).

So, all in all, I don't know what's best; I only know that code quality is more important than a junior's convenience - they're there to learn, not to have just any kind of code committed to prod...

(1) (""Why? It works, doesn't it?"" ""Yes, but you're querying database from the view layer or such"")

(2) (""you see? we need this, and we usually use this pattern for this as you seen in this other similar module; here, our coding guideline (which was 2 pages long) says, do this, so, let's do this, and then ,let's draw a sequence diagram on how it would work... ok, now let's draw a class diagram about it"")

","60125","","","","","2012-07-27 19:28:06","","","","2","","","","CC BY-SA 3.0" "158535","1","","","2012-07-27 23:35:53","","17","1732","

I'm working at a company that would score 11 on the Joel Test - at least on paper.

In practice, however, nothing works quite as well as expected, and the project has been on DEFCON 1 for half a year. Now, most of my peers are happy if they can go back home at 6pm - on Sunday.

One of the apparently good practices that struck me as not working is the use of static analysis tools. The project tracks both gcc -Wall warnings and the output of a proprietary and very expensive ""C/C++"" tool.

Gcc warnings do, more often than not, point to real (if most of the time inoffensive) bugs.

The proprietary tool, however, lists things such as implicit casts and sizeof'ing a string literal. Implicit casts are also blacklisted in their stylebook.

The standard practice is that people are pressed to make every single warning shut up. Note that this does exclude warnings that are predominantly false positives; that is not the problem.

The result is:

  1. People add type casts to every rvalue and to every argument hiding real problematic type mismatches in the process.
  2. People introduce off-by-one bugs, or use a different problematic language feature (strlen instead of sizeof, strncpy instead of strcpy, etc.).
  3. The warnings are silenced.
  4. The bug reports start rolling in.

The main point is the original code was working and written by people who were playing safe within their language abilities whereas the fixes were not.

Now, I don't really think this company can be saved. However, I would like to know if there is a better, preferably working, way to use the ""pro"" tools or if I should just avoid using them altogether in case I am the one making the decision in the future.

I'm looking for a solution which doesn't assume all programmers are geniuses who can't err. Because, well, if they were, then there would be no need to use the tools in the first place.

","","user2582","","","","2015-12-03 13:26:50","How to avoid the pitfalls of static analysis","","7","3","1","","","CC BY-SA 3.0" "266437","2","","266425","2014-12-14 21:18:51","","12","","

Kinds of objects

For purposes of our discussion, let's separate our objects into three different kinds:

Business Domain logic

These are the objects that get work done. They move money from one checking account to another, fulfill orders, and perform all of the other actions that we expect business software to take.

Domain logic objects normally do not require accessors (getters and setters). Rather, you create the object by handing it dependencies through a constructor, and then manipulate the object through methods (tell, don't ask).

Data Transfer Objects

Data Transfer Objects are pure state; they don't contain any business logic. They will always have accessors. They may or may not have setters, depending on whether or not you're writing them in an immutable fashion. You will either set your fields in the constructor and their values will not change for the lifetime of the object, or your accessors will be read/write. In practice, these objects are typically mutable, so that a user can edit them.

View Model objects

View Model objects contain a displayable/editable data representation. They may contain business logic, usually confined to data validation. An example of a View Model object might be an InvoiceViewModel, containing a Customer object, an Invoice Header object, and Invoice Line Items. View Model objects always contain accessors.

So the only kind of object that will be "pure" in the sense that it doesn't contain field accessors will be the Domain Logic object. Serializing such an object saves its current "computational state," so that it can be retrieved later to complete processing. View Models and DTO's can be freely serialized, but in practice their data is normally saved to a database.
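
As a rough sketch of that distinction (all names below are invented), a domain logic object and a DTO might look like this:

import java.math.BigDecimal;

// Domain logic object: collaborators come in through the constructor, work is done
// through methods, and no accessors expose the internal state ("tell, don't ask").
class MoneyTransfer {
    private final Account source;
    private final Account target;

    MoneyTransfer(Account source, Account target) {
        this.source = source;
        this.target = target;
    }

    void transfer(BigDecimal amount) {
        source.withdraw(amount);
        target.deposit(amount);
    }
}

class Account {
    private BigDecimal balance = BigDecimal.ZERO;
    void withdraw(BigDecimal amount) { balance = balance.subtract(amount); }
    void deposit(BigDecimal amount)  { balance = balance.add(amount); }
}

// Data Transfer Object: pure state with accessors and no business logic,
// which makes it trivial to serialize or map to a database row.
class AccountSnapshotDto {
    private String accountNumber;
    private BigDecimal balance;

    public String getAccountNumber()            { return accountNumber; }
    public void setAccountNumber(String number) { this.accountNumber = number; }
    public BigDecimal getBalance()              { return balance; }
    public void setBalance(BigDecimal balance)  { this.balance = balance; }
}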

Serialization, dependencies and coupling

While it is true that serialization creates dependencies, in the sense that you have to deserialize to a compatible object, it does not necessarily follow that you have to change your serialization configuration. Good serialization mechanisms are general purpose; they don't care if you change the name of a property or member, so long as it can still map values to members. In practice, this only means that you must re-serialize the object instance to make the serialization representation (xml, json, whatever) compatible with your new object; no configuration changes to the serializer should be necessary.

It is true that objects should not be concerned with how they are serialized. You've already described one way such concerns can be decoupled from the domain classes: reflection. But the serializer should be concerned about how it serializes and deserializes objects; that, after all, is its function. The way you keep your objects decoupled from your serialization process is to make serialization a general-purpose function, able to work across all object types.

One of the things people get confused about is that decoupling has to occur in both directions. It does not; it only has to work in one direction. In practice, you can never decouple completely; there is always some coupling. The goal of loose coupling is to make code maintenance easier, not to remove all dependencies.

","1204","","-1","","2020-06-16 10:01:49","2014-12-15 16:40:07","","","","8","","","","CC BY-SA 3.0" "372624","2","","372600","2018-06-15 21:10:11","","2","","

There are several ways to collect statistics automatically, but the problem is how to get that information back. If you choose to collect statistics automatically, then I recommend the following (a small sketch follows the list):

  • Have the statistic gathering easily turned on or off.
  • Allow the user to control where the statistics are stored.
  • Provide the tools for users to inspect and make use of that data themselves. Chances are that people who use your library are just as interested in how much it is used as you are. This keeps you in the open source mentality.
  • Make the submission of that information voluntary, or part of your bug reporting
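
A minimal sketch of the first two points might look like this (the property names and file format are just assumptions for the example):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.time.Instant;

// Opt-in usage counter: disabled unless the user explicitly enables it,
// and the user decides where (and whether) the data is written.
public final class UsageStats {
    private static final boolean ENABLED =
            Boolean.getBoolean("mylib.stats.enabled");                       // off by default
    private static final Path STATS_FILE =
            Paths.get(System.getProperty("mylib.stats.file", "mylib-stats.log"));

    private UsageStats() {}

    public static void record(String feature) {
        if (!ENABLED) {
            return;
        }
        String line = Instant.now() + " " + feature + System.lineSeparator();
        try {
            Files.write(STATS_FILE, line.getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            // statistics must never break the library for the user
        }
    }
}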

Things that will severely limit who can use your library are:

  • Automatic transmission to an undisclosed server
  • Assumption that the library will even be used on a network that can connect to the internet

Security audits look for things like that, and if your library is considered a risk, you gain a very bad reputation that is hard to shake.


All that said, the most reliable way to determine if you have users that use a particular feature is to threaten to remove it. It won't make you popular, but the silent users will speak up. If no-one says anything, it's a safe bet to remove. If they do, then you have the option to start a dialog to see what the real needs are and if there is a better way to resolve it.

","6509","","","","","2018-06-15 21:10:11","","","","0","","","","CC BY-SA 4.0" "373441","2","","373439","2018-07-01 21:41:32","","8","","

What is a technical documentation ?

The real definition: Technical documentation means any document that common mortals do not understand because of some required specialized knowledge.

The bad news is that it won't help you to determine what to put in it. The good news is that you can from now on use the concept yourself to qualify anything that you do not understand: "Uh! These accounting guidelines seem to be a very technical document" (and all those except the accountants will nod, silently agreeing with you).

What is the objective of your documentation?

The real question: For technical writing, as for any writing, the first question is to know what is the target audience, and what the primary purpose of this documentation is:

  • Is it for new team members? The most important thing is to give an overview (e.g. architecture and layers, main components), a high-level domain model (i.e. context map), as well as some hands-on information (e.g. directory structure, toolset used, naming conventions, links to other important documents to read). The details will in any case be in the code, be it in self-explanatory clean code or useful comments.
  • Is it for library users? Your javadoc or doxygen will generate a suitable reference documentation based on comments that are embedded in your code (so hopefully easy to maintain). Unfortunately, this detailed information will not make it easy to understand the design of your library. Again, you need to provide some high-level overview of the library's architecture, and how its different components interact and depend on each other. This kind of documentation is a MUST HAVE if your library is a commercial product sold on its own.

A fatal assumption would be to think that you could do a "technical documentation" that would cover any technical needs. The level of details to be understood by the team (that has to know the internals) and the users (who need to understand the use cases and the interface) is often very different.

Some advices

Grady Booch, in his book "Object-Oriented Analysis and Design with Applications", described the desired content of software documentation:

  • High-level system architecture
  • Key abstractions and mechanisms in the architecture
  • Scenarios that illustrate the as-built behavior of key aspects of the system

He further made a very specific point:

It is far better to document these higher-level structures, which can be expressed in diagrams of the notation but have no direct linguistic expression in the programming language, and then refer developers to the interfaces of certain important classes for tactical details.

","209774","","-1","","2020-06-16 10:01:49","2018-07-01 21:53:36","","","","0","","","","CC BY-SA 4.0" "373470","2","","373467","2018-07-02 16:24:19","","14","","

People have been using IDEs to write Java code almost as long as Java has been a mainstream language. NetBeans, Eclipse, and IntelliJ IDEA all existed long before Java 1.5. People made mistakes then, and they continue to make mistakes now.

The bottom line is that anything you can do to minimize those mistakes, particularly when they are relatively low cost, you owe it to yourself to do.

A more common mistake than simply misspelling a method name is removing a method from a base class without thinking of the consequences.

Example (before change):

class Vehicle {
    public void drive() {}
}

class Car extends Vehicle {
    @Override
    public void drive() {}
}

After a time, the folks who were maintaining the Vehicle class decide that driving shouldn't be something that a vehicle does, but something a driver does. So they take the logic out of Vehicle.drive() and move it to Driver.drive(Vehicle vehicle).

Now you start to see the value of the @Override annotation. As soon as that team made that change and recompiled, they would find out all of the places that need to be modified, or at least need to be accounted for in the Driver class. Without the @Override annotation, all of the sub-classes would silently compile and you would find subtle bugs that you now have to work through. In this example, Cars would not drive like a car, but like a generic driver would.
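
To make that concrete, here is a sketch of the code after such a change; the compile error on Car is exactly the early warning you want:

class Vehicle {
    // drive() has been removed; driving is no longer something a Vehicle does
}

class Driver {
    public void drive(Vehicle vehicle) {
        // the driving logic now lives here
    }
}

class Car extends Vehicle {
    @Override                // compile error: drive() no longer overrides anything,
    public void drive() {}   // pointing you straight at the code that needs attention
}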

","6509","","","","","2018-07-02 16:24:19","","","","0","","","","CC BY-SA 4.0" "75854","2","","75809","2011-05-12 13:57:12","","8","","

Here's a couple tricks:

  • Learn the team's current state and history - it sounds like they have a mentor; how much influence does the mentor have? Also, how new is the mentor, and was there a long time with no mentor? When did the problem code originate? Criticizing the current team's baby can be a lot different than criticizing some old code that no one actually remembers writing.

  • One thing at a time - don't drop the bomb on all your thoughts at a team meeting. Start with some tentative questions that come from your specific perspective. For example - ""Hey, as the new guy, I noticed that some of the utility classes are really big, is there a reason for that?""

  • Suggest baby steps - it's almost never possible to do an immediate total overhaul, so figure out some starting steps to suggest in case everyone agrees that this is a good plan.

  • Suggest future prevention mechanisms - for example, the team could agree to a goal that it will never add to the top few largest classes, but will refactor when there's a need to grow them further.

  • Listen to concerns about risk. If this is really legacy code, there may be enough unknowns and dependencies that refactoring is extremely risky. That may not be a reason to avoid refactoring, but it may mean you need some better test strategies or some other way to reduce risk before you tackle the real rework.

  • Be aware of body language and go slow. You're bringing up a problem in a code base that you haven't had a lot of experience with. You have a new guy window right now, where you can ask some naive questions and get helpful answers, and you can use those questions to probe the team to consider their own design choices. But it goes both ways - as the new guy, you also don't have a ton of ""cred"" yet, so go slow and be aware of closed faces or postures. If people start shutting down, suggest a way to delay any decisions and look for ways to win them over.

I can say as a manager and team member, I've been glad for New Guy Insights. I didn't accept every single piece of constructive commentary that a new team member gave me, but I was generally willing to listen if the criticism was voiced as honest concern and curiosity and not delivered as a lecture. The mark of respect to the new guy goes when he can deliver the insight and then step back and handle whatever comes - it's easy to feel good when your decisions are heard and taken up, it's harder when the team tells you ""no"". You may still be right, the trick is to figure out what to do next... usually waiting a bit and looking for more information is a good next step in those cases.

","12061","","","","","2011-05-12 13:57:12","","","","0","","","","CC BY-SA 3.0" "267416","2","","225931","2014-12-26 19:29:34","","16","","

Advantages, disadvantages, and limitations of your technique:

  • If the calling-code is to handle the checked exception you MUST add it to the throws clause of the method that contains the stream. The compiler will not force you to add it anymore, so it's easier to forget it. For example:

    public void test(Object p) throws IllegalAccessException {
        Arrays.asList(p.getClass().getFields()).forEach(rethrow(f -> System.out.println(f.get(p))));
    }
    
  • If the calling-code already handles the checked exception, the compiler WILL remind you to add the throws clause to the method declaration that contains the stream (if you don't it will say: Exception is never thrown in body of corresponding try statement).

  • In any case, you won't be able to surround the stream itself to catch the checked exception INSIDE the method that contains the stream (if you try, the compiler will say: Exception is never thrown in body of corresponding try statement).

  • If you are calling a method which literally can never throw the exception that it declares, then you should not include the throws clause. For example: new String(byteArr, ""UTF-8"") throws UnsupportedEncodingException, but UTF-8 is guaranteed by the Java spec to always be present. Here, the throws declaration is a nuisance and any solution to silence it with minimal boilerplate is welcome.

  • If you hate checked exceptions and feel they should never be added to the Java language to begin with (a growing number of people think this way, and I am NOT one of them), then just don't add the checked exception to the throws clause of the method that contains the stream. The checked exception will, then, behave just like an UNchecked exception.

  • If you are implementing a strict interface where you don't have the option for adding a throws declaration, and yet throwing an exception is entirely appropriate, then wrapping an exception just to gain the privilege of throwing it results in a stacktrace with spurious exceptions which contribute no information about what actually went wrong. A good example is Runnable.run(), which does not throw any checked exceptions. In this case, you may decide not to add the checked exception to the throws clause of the method that contains the stream.

  • In any case, if you decide NOT to add (or forget to add) the checked exception to the throws clause of the method that contains the stream, be aware of these 2 consequences of throwing CHECKED exceptions:

    1. The calling-code won't be able to catch it by name (if you try, the compiler will say: Exception is never thrown in body of corresponding try statement). It will bubble up and probably be caught in the main program loop by some ""catch Exception"" or ""catch Throwable"", which may be what you want anyway.

    2. It violates the principle of least surprise: it will no longer be enough to catch RuntimeException to be able to guarantee catching all possible exceptions. For this reason, I believe this should not be done in framework code, but only in business code that you completely control.

References:

NOTE: If you decide to use this technique, you may copy the LambdaExceptionUtil helper class from StackOverflow: https://stackoverflow.com/questions/27644361/how-can-i-throw-checked-exceptions-from-inside-java-8-streams . It gives you the complete implementation (Function, Consumer, Supplier...), with examples.
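
If you just want the general idea, here is a minimal sketch of the Consumer variant, matching the rethrow(...) call used in the example above (the linked class uses slightly different names and also covers Function, Supplier, etc.):

import java.util.function.Consumer;

public final class LambdaExceptionUtil {

    @FunctionalInterface
    public interface ConsumerWithExceptions<T, E extends Exception> {
        void accept(T t) throws E;
    }

    // Declaring "throws E" here is what makes the compiler aware of the checked
    // exception at the point where rethrow(...) is called inside the stream.
    public static <T, E extends Exception> Consumer<T> rethrow(ConsumerWithExceptions<T, E> consumer) throws E {
        return t -> {
            try {
                consumer.accept(t);
            } catch (Exception exception) {
                throwAsUnchecked(exception);
            }
        };
    }

    // The unchecked cast is erased at runtime, so the original checked exception
    // propagates as-is instead of being wrapped in a RuntimeException.
    @SuppressWarnings("unchecked")
    private static <E extends Exception> void throwAsUnchecked(Exception exception) throws E {
        throw (E) exception;
    }
}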

","161513","","-1","user40980","2017-05-23 12:40:15","2016-02-25 16:31:34","","","","0","","","","CC BY-SA 3.0" "160261","2","","160128","2012-08-09 21:39:06","","2","","

I think there are a few problems on both ends. It may be either of the scenarios you presented.

Scenario 1: Intern is underperforming

Working remote is the flag here. It takes a very disciplined individual to actually work remotely. With little experience, the intern may not have the combination of motivation and knowledge required to perform well remotely. If you need a remote position, this intern is not for you.

    Further, you mention the fun aspect. I know it's more rare, but why not look for someone who thoroughly enjoys the tasks you're assigning them? Honestly, SCRUM and TDD can be very fun, it just takes the right mindset. Yes, they are few and far between, but there are people out there who actually enjoy learning new CS technologies and methodologies (= Perhaps you need to continue your search.

Scenario 2: You are not mentoring close enough

I believe if this was the case, you'd know. If the intern was motivated, he'd come to you with questions. He would ask you when he doesn't understand something. However, if he's writing bad code and you never said anything, then you might need to speak up more. The deal is, with his experience, he's not going to know all the time if he's writing bad code. So if you never spoke up when you saw issues, you only have yourself to blame.


Whatever the case, there is nothing wrong with deciding the intern is not right for the job or is not ready yet. They are young and have much to learn. They may need a few more years to just be a student before being really motivated to do professional work. If you feel the problem is fixable, give it a shot over the next few weeks and try to keep the intern. If not, you both had a summer of learning and that's that.

My personal opinion: If I lay down guidelines and requirements and someone isn't meeting them, I'm not going to want to work with them further because I know and work with people who love what they do, even the CRUD, and who are noticeably motivated to make a great product. Anyone like this is bound to succeed (= In the end though, the one way to get to the bottom of the issue is to talk with the intern. Try to have a progress meeting (or something) where you get to talk to the intern before making any decisions.

","50424","","","","","2012-08-09 21:39:06","","","","0","","","","CC BY-SA 3.0" "373766","2","","373763","2018-07-08 06:37:09","","2","","

There is no industry standard.

The diff-and-patch workflow originated in the mid-80s in the context of open-source Unix development. You could send diffs around via email and then apply that patch to your local code. This was influential in the development of version control systems like Git.

Other ecosystems have entirely different conventions. Also, other ecosystems tend to be less text- and command-line oriented than the Unix/Linux/Open-Source ecosystems. Therefore it is a red herring to try to find the command line switch that will make TFS accept your patch.

When working with other people, you will have to negotiate a common workflow. Sending whole files around is the lowest common denominator and isn't necessarily a bad idea – if everyone uses some version control system locally they can copy the file into their source tree and quickly see the changes. In Git you can start a branch off the last common state, apply their changes, and then merge them into your common state to prevent conflicting changes from being silently overwritten.

In such a workflow, the value of sending around patches instead of whole files is quite diminished: while patches let you see the changes at a glance and save you bandwidth and mailbox storage, none of that is crucially important (as compared to, say, productively collaborating).

I still suggest that you explain every change so that the other party doesn't have to search for it themselves: a kind of natural-language diff or very detailed commit summary. E.g.: “in foo.sql, I changed the type of the product_category column from varchar to an enum because $requirements”.

","60357","","","","","2018-07-08 06:37:09","","","","3","","","","CC BY-SA 4.0" "160324","2","","160308","2012-08-10 11:36:59","","3","","

You're on the right track when you say you need voice and remote desktop software, irrespective of whether you're going to be using Visual Studio or other tools to collaborate. I don't use Visual Studio myself, so I can't answer the part of your question about any tools integrated with it, but I do work as part of a distributed team and there are a whole heap of choices out there for you to pick from.

For VoIP, Skype is a common choice and one I've used a lot in the past, but I have to say I'm not a great fan of what it's become (ads and the beautiful simple interface has been replaced with a rather counter-intuitive one), so I tend to use either Trillian to talk to my Skype contacts, or C3 (which is actually intended for online gamers but is also great for general VoIP communication, is much less of a resource and bandwidth hog and is completely free). I found Google Talk's ""feature"" of asking you ""Are you still there?"" after a couple of hours while you're clearly still talking a bit annoying, as there's usually no point in keeping the tab in the foreground, so often we missed the question and got thrown out. Quality-wise, there wasn't much between the three on a broadband connection; if anything, I'd give the edge to C3.

As for web conferencing (or desktop sharing) software, which you'll need in order to view each other's desktops and control each other's mouse/keyboard for pair programming, I've used Netviewer commercially (my client had a license) in the past (before they were bought up by Citrix) and more recently TeamViewer, which is similar from a pure desktop sharing point of view but seems to have a few fewer features (or maybe I just haven't discovered them yet). We are also considering OpenMeetings but I haven't used it much yet so can't make an informed recommendation on that one.

Wikipedia's comparison pages seem to be kept quite up-to-date if you'd like more options to pick from:

http://en.wikipedia.org/wiki/Comparison_of_VoIP_software

http://en.wikipedia.org/wiki/Comparison_of_web_conferencing_software

Most of the commercial ones tend to have at least free trials, so make sure you try before you buy.

Once you've got the right tools set up, there's not that much difference between doing XP while sitting next to each other and while sitting in different parts of the world. (And there are actually benefits, e.g. you can't knock over the other guy's coffee cup and you can keep your own favourite keyboard and mouse settings.)

","45614","","","","","2012-08-10 11:36:59","","","","3","","","","CC BY-SA 3.0" "375278","2","","375256","2018-07-16 14:41:02","","2","","

The real-world example you provided, DueInvoices, lends itself very well to the concept that this is a collection of invoices that are currently due. I understand completely how contrived examples can get people wrapped up in the terms you used vs. the concept you are trying to convey. I've been on the frustrating end of that myself multiple times.

That said, if the purpose of the class is strictly to be an IEnumerable<T>, and it doesn't provide any other logic, I have to ask whether you need a whole class or can simply provide a method off of another class. For example:

public class Invoices
{
    // ... skip all the other stuff about Invoices

    public IEnumerable<Invoice> GetDueItems()
    {
         foreach(var line in File.ReadLines(_invoicesFile))
         {
             var invoice = ReadInvoiceFrom(line);
             if (invoice.PaymentDue <= DateTime.UtcNow)
             {
                 yield return invoice;
             }
         }
    }
}

The yield return approach works when you can't just wrap a LINQ query, or when embedding the logic is easier to follow. The other option is simply to return the LINQ query:

public class Invoices
{
    // ... skip all the other stuff about invoices

    public IEnumerable<Invoice> GetDueItems()
    {
        return from Invoice invoice in GetAllItems()
               where invoice.PaymentDue <= DateTime.UtcNow
               select invoice;
    }
}

In both of these cases you don't need a full wrapper class. You just need to provide a method and the iterator is essentially handled for you.
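
For completeness, a rough sketch of the call site (the parameterless construction of Invoices and the SendReminderFor helper are invented here, just to show the shape):

var invoices = new Invoices();

foreach (var invoice in invoices.GetDueItems())
{
    // The iterator is consumed lazily, one due invoice at a time.
    SendReminderFor(invoice);
}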

The only time I needed a full class to handle the iteration was when I had to pull blobs out of a database in a long-running query. The utility was for a one-time extraction so we could migrate the data elsewhere. There was some weirdness I encountered with the database when I tried to stream the content out using yield return, but that went away when I implemented my own custom IEnumerator<T> to better control when resources were cleaned up. This is the exception rather than the rule.

So in short, I recommend not implementing IEnumerable<T> directly if your problems can be solved in either of the ways outlined in code above. Save the overhead of creating the enumerator explicitly for when you cannot solve the problem any other way.

","6509","","","","","2018-07-16 14:41:02","","","","2","","","","CC BY-SA 4.0" "268127","2","","268063","2015-01-05 22:46:07","","7","","

It's perfectly possible to do this. I've written a blog with a few guidelines, but these additional ones might help you too:

  • Think about the capabilities of the system. For instance, an accounting system might have the ability to read bank feeds, raise invoices, email those invoices, etc.

  • Group the scenarios in terms of the capabilities. Look at what kind of contexts (the givens) produce what kinds of outcomes (the thens). Have some conversations with the business people about these, and pick up their language as far as possible. The capabilities themselves will drive what you write for the events (the whens).

    For instance, you might find a couple of scenarios for raising invoices where it says something like:

    Given an organisation to bill is outside the US
    When we send the invoice
    Then international bank details should be included.

    Given an organisation to bill is within the US
    When we send the invoice
    Then it should include only US bank details.

    These will then tie into the automation that does the more detailed steps that actually create organisations with different addresses in different countries, send the invoices, and verify that those invoices have been sent with the correct bank details. There will be far more automation steps than there will be of these higher-level ones (see the sketch after this list). This is commonly referred to as declarative vs. imperative language, and will help you to work out which scenarios are the most important to cover, and which are functionally equivalent.

    Notice that the difference between the scenarios is called out fairly cleanly here, which it wouldn't be if there were multiple UI steps hiding that difference. The difference between the scenarios is what illustrates the behaviour.

  • You are likely to find bugs. It's up to you if you want to write scenarios around what the system should do. It's highly likely though that by now there are some human workarounds, so I wouldn't worry too much about this behaviour. If the application is in the wild and producing value, it's good. Make sure you get scenarios around the core capabilities written first.

  • Whenever you have to fix a bug, write some unit tests. This will force you to redesign your code. Regression bugs are usually caused by poor design, and adding yet more scenarios will just make the code harder to change rather than give you any more confidence in it. This is what Michael Feathers is primarily referring to here. See also the test pyramid. As your system is refactored, the number of unit tests and integration tests should be rapidly outstripping those at the UI level.

  • You can use the different capabilities of the system to guide you in finding the seams which Michael talks about in his book, which will help you to refactor.

  • Note that I don't use the word test very often. Business people will tend to talk more comfortably about the behaviour of the system when you talk about examples or scenarios in which things happen than when you talk about tests. This harks back to the origins of BDD.
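
To make the declarative/imperative split a bit more concrete, here is a rough sketch of how the high-level steps above might bind to the detailed automation underneath. SpecFlow-style attributes and an NUnit-style assertion are used only for illustration; every other name below (BillingDriver, CreateOrganisation, etc.) is invented:

[Binding]
public class InvoiceSteps
{
    // Hypothetical driver that hides the imperative details (test data, UI clicks, API calls).
    private readonly BillingDriver _billing = new BillingDriver();

    [Given("an organisation to bill is outside the US")]
    public void GivenAnOrganisationOutsideTheUs()
    {
        _billing.CreateOrganisation(country: "DE");
    }

    [When("we send the invoice")]
    public void WhenWeSendTheInvoice()
    {
        _billing.SendInvoice();
    }

    [Then("international bank details should be included")]
    public void ThenInternationalBankDetailsShouldBeIncluded()
    {
        // Verify against whatever the system actually produced (email, PDF, API response...).
        Assert.IsTrue(_billing.LastInvoiceHasInternationalBankDetails());
    }
}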

Good luck!

","537","","","","","2015-01-05 22:46:07","","","","4","","","","CC BY-SA 3.0" "77158","2","","76890","2011-05-18 09:13:00","","2","","

Many teams stray from the core of agile, and it is your job to bring them back. You need to teach and re-embed agile values into the team. In fact, you should constantly be teaching agile values. Hold out your vision of agile, make it clear and powerful. Show them your commitment to ""agile done well"".

To do this, walk them through the agile manifesto and Scrum values. Ask them what collaboration means for them and why it is important. Ask them about the role of trust in agile. This is a great time to talk about why there is no team lead role and no project manager role in Scrum and that it is the whole team's responsibility to make great software, not the individual.

Plan an entire retrospective session around this. Get them to commit to some values and follow up during the next retrospective. Don't point fingers, use neutral methods.

Introduce methods that force the other members to state their opinions safely. Something as simple as fist-of-five is great for getting the silent voices in the team heard. It makes it painfully obvious when the team disagrees with the dominant guy. Planning Poker works well too, but the key is not to allow any discussion before the cards are shown. Anything that helps get the others heard without starting conflicts is helpful.

If that goes well, you're all set. Otherwise, talk to him about the problem. Use coaching and ask powerful questions that can help him see the problem clearly. Try to get to the root cause of why he has taken on the dominant role. Maybe he lacks trust in the team (why?), and maybe he feels he is responsible for success (why?). I suspect this role is not something he wants, and quite possibly he would like it to change. He may come around and realize it.

","5692","","5692","","2011-05-18 12:38:24","2011-05-18 12:38:24","","","","0","","","","CC BY-SA 3.0" "77440","2","","77102","2011-05-19 08:02:48","","2","","

Most of the points are already covered in the other answers. I'll add some tips.

  • Be present. Not self-aware in the introvert sense, but aware of the surroundings and the people around you.
  • There are a lot of ice breakers listed above. Choose your favourites and keep them at the back of your mind. Don't think about consciously going out of your way to hunt for people to talk to. Conversations just happen. If you force yourself to talk to someone it will show, maybe in your body language or in the tone of your voice, and then lead to an awkward conversation. Most communication is not verbal. Which brings me to the next point.
  • Be comfy in your own skin, just like you would be when you are watching TV at home. Be relaxed, with a more 'open' body language (don't be tense, don't slouch, roll your shoulders back, keep your feet apart).
  • Sometimes making jokes, complaining in a lighthearted way, or asking for the time or for help are good ways to start a conversation. If you are having a good time, people will naturally want to talk to you.
  • Once you are on a roll, i.e. you've talked to, let's say, 20-30 people in a room, strangers will come up to you to talk. That is just how social dynamics work.
  • There are a lot of tips here, but if you try to do it all you may get confused. Personally I would take only a couple of suggestions and work on them until they become natural, and then move on to other suggestions. Take baby steps. Stick to your strengths instead of trying to improve your weaknesses.

You've worked on some really cool stuff. If you are passionate about something it will show.

To tell you the truth, reading books doesn't really help a lot unless you try them out page by page, which can take years. Instead, go by your natural instincts. Once you gain some momentum you won't need books to make friends. Just be your comfortable self. Don't go meta.

","11428","","","","","2011-05-19 08:02:48","","","","0","","","","CC BY-SA 3.0" "77516","2","","77504","2011-05-19 13:29:04","","2","","

The best argument against Hungarian notation, besides modern IDEs (which have plenty of ways to show a variable's type, visibility and more via colour, tiny symbols and tooltips while hovering), is to take it seriously.

  • Encourage more distinction: (b)ool, (f)loat, (c)har, (l)ong, (s)hort (conflicts with String? no: (S)tring), (v)oid.
  • Encourage encoding the visibility. I'm from Javaland, and hope it fits for .net as well: (pri)vate, (pub)lic, (pro)tected and (def)ault should be used.
  • .net has final/const? Make it a prefix! Do I hear 'volatile'?
  • Why do int and long need a prefix, but different objects don't? That's not logical. Create an abbreviation table where every new Object gets a distinct abbreviation.
  • Variables which might be null, and those which should never be null, can be prefixed too. Clever people put the whole DbC into the prefix of a variable.

Seriously: on refactoring, you might change a variable from int to long, or from String to char. You shouldn't need to change the name, too.
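
A small illustration of that point (the names and the SumInvoiceAmounts helper are invented):

static long SumInvoiceAmounts() => 42;   // stand-in for the real calculation

static void WithHungarian()
{
    // Was "int iTotal" before the refactoring: now the prefix lies about the type,
    // or every usage of the variable has to be renamed as well.
    long iTotal = SumInvoiceAmounts();
}

static void WithoutHungarian()
{
    // Was "int total": only the declaration changed, and the name stays honest.
    long total = SumInvoiceAmounts();
}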

In IDEs, you often get the names sorted in a box at the side, sorted by name, where they are easy to find. If most of the variables start with o or i, it is harder on the eyes to get to the significant part of the name.

The additional characters disturb the semantics of the variable. An integer 'sane' gets 'i_sane', which looks more like 'insane'.

Hungarian notation was helpful in languages which lack a type system. You don't need it if the compiler enforces specific types. If you decorate your lament about Hungarian notation with an empathetic 'yes, for the elder programmers, it made sense in the past to use it!', those elder programmers might be vain, and prefer not to be identified as old.

But you have to be careful for the technique to work. Maybe you can lower your voice when speaking of 'elder programmers', to let them feel how careful you are about them, how much they need the care. So that a third person in the room will recognize that you're trying to hide something, which will of course raise his curiosity.

","16349","","16349","","2011-05-19 13:35:20","2011-05-19 13:35:20","","","","1","","","","CC BY-SA 3.0" "269395","2","","269389","2015-01-08 06:57:08","","9","","

Two very essential things to understand are that:

  • You can never anticipate every change a customer may ask for. I had a customer who decided to switch a two-month project from PHP to ASP.NET one week before release and was convinced that this would be an easy change.

  • Any change will have a cost. It doesn't matter if you are using Agile or if you have clean and extensible design, the cost will still be there.

Having said that, there are multiple techniques which lead to less expensive changes. As already noted by Jörg W Mittag, this is pretty much just ""good design"", but if you want something more specific, here are some hints:

  1. Avoid code duplication at all costs. Having to make a change in this class, then in this one, and finally there—be careful, the code there is not exactly the same as in the first two locations—would increase the cost.

    The major problem here is that on large projects, developers cannot possibly know which pieces were already written and may develop their own in a different location in the code base. Clean architecture and proper documentation helps, but doesn't make the problem disappear.

  2. Your system should be decoupled as much as possible. If a small change in the module which displays generated invoices on the screen requires rewriting a few classes in the module which handles registration of new users, there is something wrong with the decoupling.

    • Decoupling may be done through interfaces. This means that you can work on the underlying logic while the interface remains the same and other parts of the system are unaffected. For example, you may have an interface for a logger component; when your customer asks to switch from syslog to a RabbitMQ-based message queue, your changes are constrained to the class which implements the given interface, and the users of this interface don't really care where the logs go (see the sketch after this list).

      The major problem here is that interfaces may be leaky. For example, you move from syslog to RabbitMQ, and then notice a new type of exception when the message queue service is unreachable. Classes using the logging interface should now handle this additional exception.

    • Additional decoupling may be achieved with Dependency injection. The benefit is that instead of working on the class itself, you create a new one, test it separately, and when ready, swap the old class with the new one, while keeping the possibility to go back to the old class seamlessly (either by modifying a single line of code, or through configuration).

    The downside is that this requires more work. If used too much (every part of the application is injected), the design may become too complicated.

  3. Environment matters as much as design. Some hints:

    • Your code should be maintainable. Spaghetti code is problematic not by itself, but specifically because it makes it difficult to modify the code base.

    • Uniform style matters, because it makes the code easier to read; difficult to read code is difficult to change as well.

    • Automated regression testing makes maintenance less stressful. A year ago I worked in a company where the key product had 0% code coverage. When somebody had to make a change, he made it, and then if something bad happened in production, the programmer was the culprit. This is an excellent example of how not to anticipate changes.

      The problem is that tests should be maintained as well, so instead of simply changing code, one should change tests and code. The time spent changing tests is nothing compared to the time wasted because of the lack of proper testing, but inexperienced project managers may not understand that.

    • All members of the team should be aware of all parts of the code base (of the product, not the whole company).
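
To illustrate the interface-plus-injection point from above, here is a minimal sketch (all names are hypothetical):

public interface ILogger
{
    void Log(string message);
}

public class SyslogLogger : ILogger
{
    public void Log(string message) { /* write to syslog */ }
}

public class MessageQueueLogger : ILogger
{
    public void Log(string message) { /* publish to RabbitMQ */ }
}

public class InvoiceRenderer
{
    private readonly ILogger _logger;

    // The concrete logger is injected, so swapping syslog for RabbitMQ
    // (or a fake logger in tests) never touches this class.
    public InvoiceRenderer(ILogger logger)
    {
        _logger = logger;
    }
}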

","6605","","","","","2015-01-08 06:57:08","","","","2","","","","CC BY-SA 3.0" "375723","2","","375710","2018-07-23 13:19:56","","2","","

TL;DR I would suggest implementing a combination of the second (PUT /invoices/{id}) and third (PATCH /invoices/{id}/) suggestions.

Let us look at the first part, which is the URI. Many standards can exist as to how the URIs can be formed, depending upon the guidelines followed by a project or team. However, for simplicity, one of the rules of thumb that I would prefer to keep is that a given resource end-point should ideally have a matching GET end-point (except if the endpoint ends with an operation name).

To make my point clear, PUT /invoices/{id}/status - this suggestion should only make sense (or make things intuitive for the API consumer) when you also have an equivalent GET /invoices/{id}/status for it, which I believe would not be the case, as it would only make your development effort far more cumbersome. If by any chance you do want to selectively expose attributes in 'GET', I would still suggest implementing GraphQL for 'GET' rather than implementing your own standard and going with 'PATCH' for updates. (Also, just a small recommendation: you ideally want to keep attributes such as status in an invoice read-only, as they represent the state of the invoice and should ideally be changed via some operation-specific endpoint such as POST /invoices/{id}/send rather than through a PUT operation.)

Coming to the other suggestions mentioned, PUT /invoices/{id} and PATCH /invoices/{id}/, the decision is based upon whether the API is exposed internally or externally, because otherwise both are valid HTTP standards.

  • If exposed externally, I would suggest implementing both PUT and PATCH, because many of the API consumers might not have relevant support for implementing PATCH, or might consider it a bit too tricky to implement flawlessly, depending upon the developers they have; in such a scenario you would prefer to give them both options.
  • If the API is strictly for internal usage, you can in such cases choose to go solely with PATCH to get the benefit of reducing unnecessary payload in your update request.
","303971","","","","","2018-07-23 13:19:56","","","","0","","","","CC BY-SA 4.0" "78346","2","","78333","2011-05-22 20:49:48","","1","","

Personally, I find it IMPOSSIBLE to work in a pair with someone, even if I am learning from that person. Maybe it's just that some people (i.e. me) work better in the more ""classic"" ways (getting into the zone, silence, etc...).

Or maybe it's the fact that XP is mostly implemented within web dev shops in which people wear many hats and instead of solving hard problems in one domain (e.g. optimizing a piece of code), they spend time finding an already existing solution for a problem not very hard intellectually (e.g. integrating a shopping cart onto the page etc.).

For something like this, working in pairs, lots of communication, etc. might be the only way to move forward effectively (you're not going to spend X hours just to find that e-mail sending module Joomla!/Drupal bug, are you?)

","5185","","","","","2011-05-22 20:49:48","","","","0","","","","CC BY-SA 3.0" "162105","2","","162102","2012-08-24 02:17:09","","12","","

If you were on a team like this, what would you want your boss to do with his time?

  1. Remove impediments to progress.
  2. Mediate disputes between team members.
  3. Interact with business people so we don't have to.
  4. Keep us informed of that higher level business/project stuff so we don't feel isolated.
  5. Keep us honest, especially if/when a bad apple gets into the team.
  6. Be an advocate for the team to other departments.
  7. Be the unified voice of push-back against unreasonable business requests.
  8. Facilitate communication amongst the team.

There's probably a bunch I'm forgetting, but that's the core of it. Don't implement process, handle some of that overhead/inefficiency that naturally develops as the team size increases.

","51654","","","","","2012-08-24 02:17:09","","","","4","","","","CC BY-SA 3.0" "270414","2","","270369","2015-01-17 17:54:07","","7","","

However, the purpose of this is not to be a package manager, but instead a standard to follow when you want to implement a system that silently, automatically updates your software in the background. I'll look through apt and rpm!

The problem is that, whether you like it or not, what you want to build is a package manager.

Your current proposed system seems to revolve around simply downloading installers at some unknown interval, and running them. There is a lot more you'll have to do.

That model depends on each and every package being able to have a silent installer. While having a silent installer is great, that will not work for a great many software packages. You'll either have to ignore them, making the 'standard' pretty much useless, or have to define exceptions that can be channeled through your software for when the user has to make decisions.

Say some package depends on the .NET framework. For Windows, that's going to be a lot of packages. Since the installers won't know whether .NET is installed, they'll have to include it, every time. Are you going to want the system to download that massive redistributable over and over and over? And if you don't, then you'll have to have a way for softwares to indicate that's what they need, so your system can download it for them. And having the initial software won't save you, as it is not uncommon for a given software product to change .net framework versions between 'updates'.

Pretty soon, different packages will need to have different versions, and so your software will have to keep track of all that.

How is your software going to handle updates that can't be done successfully? It is unacceptable for your auto-update to randomly break an existing software package. You'll have to implement a system for handling that, and rolling back the changes. Windows Installer provides some of that, but not all packages use Windows Installer to install themselves, and Windows Installer is obviously not available on any other popular platforms.

I don't think you realize the complexity this task requires if it is going to solve real problems for many people.

","6644","","","","","2015-01-17 17:54:07","","","","3","","","","CC BY-SA 3.0" "162586","2","","162578","2012-08-28 10:27:10","","8","","

Yes, one-to-ones are very important, above and beyond the meetings Scrum dictates.

Daily standups give daily feedback on the state of the project. Iteration planning meetings are specifically for planning the next iteration. Even retrospectives concern what we as a team are doing well or can do better.

Nothing in Scrum encourages a manager to sit down, preferably out of the office environment, with each individual and talk to them. This should be a regular opportunity for stopping small problems from snowballing (because no one likes surprises at review time), but MUCH more importantly it should be a way to get feedback on how you are doing as a manager, what you can do to improve individuals' lives, and what you aren't aware of in the team (perhaps a team member who isn't performing).

The experience you had is not unique, not by any stretch of the imagination.

Rands in Repose did a good article on healthy 1-to-1 cultures a couple of years ago.

The sound that surrounds successful regimen of 1:1s is silence. All of the listening, questioning, and discussion that happens during a 1:1 is managerial preventative maintenance. You’ll see when interest in a project begins to wane and take action before it becomes job dissatisfaction. You’ll hear about tension between two employees and moderate a discussion before it becomes a yelling match in a meeting. Your reward for a culture of healthy 1:1s is a distinct lack of drama.

","12828","","12828","","2012-08-28 10:44:31","2012-08-28 10:44:31","","","","1","","","","CC BY-SA 3.0" "271499","1","271504","","2015-01-29 08:22:40","","5","2095","

I want to test a method which is not so much a unit, because it is more of an 'orchestrator' / 'process' / 'controller' / 'coordination' class.

This is the case:

I have four unit tested classes:

  • One is a data service which can read/write data from the database
  • Second is a textservice which can create content for emails / messages etc.
  • Third is a mailservice that can send email
  • Fourth is a class that can create tasks for users in our system (tasks are things they should do)

Now I created a new class, which sends an email to all people who are late paying an invoice. It reads data with the data service, creates the appropriate text with the textservice, sends an email with the email service, writes the new invoice status with the data service, and creates a task when the emailing for a specific invoice fails.
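
Roughly, the new class looks like this (all the names and signatures below are made up, just to sketch the shape):

// The four small, already unit-tested services (signatures invented for the example):
public class Invoice { public string CustomerEmail { get; set; } }
public interface IDataService { IEnumerable<Invoice> GetOverdueInvoices(); void MarkReminderSent(Invoice invoice); }
public interface ITextService { string CreateReminderText(Invoice invoice); }
public interface IMailService { void Send(string to, string body); }
public interface ITaskService { void CreateFollowUpTask(Invoice invoice); }

// The coordination class in question: it only wires the services together and returns nothing.
public class OverdueInvoiceReminder
{
    private readonly IDataService _data;
    private readonly ITextService _text;
    private readonly IMailService _mail;
    private readonly ITaskService _tasks;

    public OverdueInvoiceReminder(IDataService data, ITextService text, IMailService mail, ITaskService tasks)
    {
        _data = data;
        _text = text;
        _mail = mail;
        _tasks = tasks;
    }

    public void SendReminders()
    {
        foreach (var invoice in _data.GetOverdueInvoices())
        {
            try
            {
                _mail.Send(invoice.CustomerEmail, _text.CreateReminderText(invoice));
                _data.MarkReminderSent(invoice);
            }
            catch (Exception)
            {
                // Emailing failed for this invoice: create a follow-up task instead.
                _tasks.CreateFollowUpTask(invoice);
            }
        }
    }
}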

Now this new class is my 'orchestrator' or 'process' or 'controller' or 'coordination' class.

I have these kinds of classes a lot in our application because we try to make our classes (like the data/email/textservice) as small as possible, so when 'work has to be done', like in this case 'mail all people who are late with paying', we create a new 'orchestrator' or 'process' or 'controller' or 'coordination' class.

I think I have these kinds of classes for most of the actions in my web controllers, because most input sent from the browser involves coordination between multiple (smaller) classes.

Now how do I test these classes / methods?

I used to mock all 4 classes in my test and verify at the end that the classes are called, and in the right order.

But more and more I read that you should not do this, because then you test the internal workings of a method, and when you refactor that method, the test fails. So I should test for results, not for inner workings. But this method is void, so there is not much of a result to verify. The only thing I can think to check, for instance, is: was the email sent? But the only way to check that is to verify that the email service was called, and then I'm back at testing the internals.

I don't see these kinds of examples in the unit testing / TDD books, because most of the time they only work with small classes like a calculator class; rarely do I see examples for 'orchestrator' classes like the one I'm describing, which occur a lot in my code.

For those who think it's a duplicate: I think the answers here provide much more background than the one answer at the other question. That other question was answered with an integration test in mind, and my question is about unit testing, not integration testing. So I can't agree with the duplicate answer mark.

","23264","","23264","","2015-01-30 13:09:17","2015-01-30 13:09:17","How to test a method which is not as much as a unit, because it is more of a 'orchestrator' / 'process' / 'controller' / 'coordination' class","","4","1","","2015-01-30 11:44:05","","CC BY-SA 3.0" "80676","2","","80529","2011-05-31 23:44:55","","4","","

All answers provide already very good advice, but here's a quick list of things that come to mind:

Your Immediate Evaluation Checklist

  • suggest to the project manager to do a project retrospective
    • this will help have everyone sit together and see issues (including the obvious poor time management, which isn't your fault but your manager's!)
    • this will help identify things that were done right (including, hopefully, your work)
  • ask for feedback from your boss, the project manager and the dev lead:
    • first in writing via a polite email (to use the proper channels),
    • and right after sending it walk up to them (individually or not) and ask directly (to really get this feedback)
    • do this politely, and without being whiny. No one likes a junior who gives the impression he thinks he's better than anyone else, knows all the best practices, wants to turn things around, and wants to be praised for every single thing.
  • ask yourself whether your actions were really up to the standard that you estimate here:

    • Did you implement everything?
    • ... with the right level of quality?
    • ... with the proper documentation, communication and tracking?
    • ... with the proper hand-over to the maintainers?
    • ... while following processes accordingly?
    • What can you do better?
    • What have you learned? (if you learned something, maybe you did a few things imperfectly at first)

    (I'm saying all this because it would hurt if you realized later that, actually, other people had to look over your shoulder for a while, and that instead of deserving praise, they were nice enough not to point out all the mistakes you may have made.)

Your Regular Evaluation Checklist

  • Keep track of what you do
    • support calls (log them in the issue tracker or adequate system)
    • bugfixes (log them in the issue tracker or adequate system)
    • enhancements (log them in the issue tracker or adequate system)
  • Keep track of what you think can be improved (in your work, or overall)
  • Keep track of what you want to be doing in the next review period

When I have had personal performance reviews (scheduled by the company or initiated by me), I always come prepared with all of these, to clearly show what I'm happy or dissatisfied with.

In this review, do mention your objectives, both as part of the team and personally. What's your personal development plan? Do hint at the fact that you want more (money, responsibilities, appreciation) and that you may have other options (but do it without being threatening; it's not easy, but important for good relations if you decide to stay).

Clearly voice your disapproval about being refused the right to attend family matters.

Your Company Checklist

  • How many times did you work late? Over the week-end? Over lunch?
  • Does this happen often? Is this an exception? Is this only you?
  • Do you have a career prospect here? Do you have a clear path to a promotion or a raise, or the perspective of a reward of some kind for your actions now, or in the future?
  • Do you enjoy coming to work?
  • Do you enjoy your work? Your workplace? Your working conditions?
  • Do you enjoy the industry and the outcomes or your work?
  • Do you like the people with whom you work?
  • Do you feel like you are improving personally, technically and humanly, over time?
  • Do you feel like you get respect for what you do?

If a majority of these don't add up, don't look back, except if you're ready to bite the bullet for a while and then dash out once you've got enough experience or money, or found something else.

Your ""Next-Steps"" Checklist

  • update your resume (Don't do it at work or advertise that you look for other things openly, but don't be afraid to have it published and visible online: if they see you update it, they will realize you might leave and will think about what that would mean for them),
  • build up your portfolio (open source projects, personal projects, etc...)
  • brush up your interview skills,
  • brush up on new technologies or things you want to work with.

A Note About Honesty

I have been wondering, is being honest/dedicated to the job what resulted in this situation?

Clearly: no. Honesty never results in this. Most likely, it found its roots in either:

  • your office's lack of judgement (in appreciating and rewarding your work) and honesty (in seeing it)
  • your own unconscious lack of honesty (in estimating the quality of your work).

It's either one or the other. I'd bet on the first one. However, I mention the second one because I don't know you (and if I don't see it, then it didn't happen :)), and I've seen (so, it happened) incredibly useless people in software (or other) teams who were convinced they were a gift to the company, while they only burdened others.

But honesty, if it's there, cannot be the cause, obviously.

","3631","","3631","","2011-06-01 08:33:17","2011-06-01 08:33:17","","","","2","","","2011-06-04 19:09:54","CC BY-SA 3.0" "272039","2","","271991","2015-02-04 02:35:50","","2","","

Wherever I have worked in software development, there was always an equivalent of a backlog, always a list of bug reports, and always someone responsible for the priorities (an equivalent of a PO), so your question, though it uses Scrum terms, is IMHO far from being restricted to Scrum.

A quick Google search for ""scrum backlog bugs"" reveals that some teams separate bugs / issues from ""new feature"" stories, others don't. Sometimes issue trackers are used, sometimes a wiki, sometimes everything goes directly into the backlog, etc., and there is no consensus on ""which is better"". So the first thing you should clarify with your team is how you want to handle this, what your team wants to see in the backlog and what not, and what your team thinks will work best for your case.

If your experience is that letting everyone add anything to the backlog messes it up, it will probably be better to separate user stories from bugs. The requirements for what a good bug report should look like are probably different from the requirements for what a good user story for a new feature should look like, which might be another reason, too. Nevertheless, both will end up as tasks for the developers, which might be a reason to keep them in one place.

However, for most products it makes sense that bug reports can be filed by anyone (users, testers, developers, marketing people, whoever notices a potential problem). ""New features"" should probably be discussed with the PO as part of the process of adding them to the backlog. Your PO should decide if he wants to put the stories in there himself, to do some editorial work beforehand, or if anyone can put stories into the backlog and he does the editorial work afterwards. But for bugs, especially the ones which look severe to the reporter, there should be no discussion needed to add them to the issue tracker, or backlog, or bug list document, or wherever you keep them. ""No discussion"" does not mean to put the bug report anywhere silently, quite the opposite: when a bug report is added, and the reporter thinks it is probably not a minor one, the PO (or whoever is responsible) must be informed.

As a final note: to make the right decision for your team on how to manage this, it makes quite a difference what size your product has, how many people are going to report bugs and new stories, and whether you get one bug report per week, a dozen, or several hundred.

","9113","","9113","","2015-02-04 12:01:47","2015-02-04 12:01:47","","","","0","","","","CC BY-SA 3.0" "164357","2","","152094","2012-09-10 14:04:41","","7","","

I'm not a good source of objective information, but subjectively:

  • I don't want my system to die, ever; to me that's a sign of a badly designed or incomplete system; a complete system should handle all possible cases and never get into an unexpected state; to achieve this I need to be explicit in modeling all the flows and situations; using nulls is not at all explicit and groups a lot of different cases under one umbrella; I prefer a NonExistingUser object to null, I prefer NaN to null, ... such an approach allows me to be explicit about those situations and their handling in as much detail as I want, while null leaves me with only the null option; in an environment like Java any object can be null, so why would you prefer to hide your specific case in that generic pool of cases as well?
  • null implementations have drastically different behavior than objects and are mostly glued to the set of all objects (why? why? why?); to me such a drastic difference seems like a result of nulls being designed as an indication of error, in which case the system will most likely die (I'm surprised so many people in the above posts actually prefer their system to die) unless the null is explicitly handled; to me that approach is flawed at its root - it allows the system to die by default unless you have explicitly taken care of it, and that's not very safe, right? In any case it makes it impossible to write code that treats a null value and an object in the same manner - only a few basic built-in operators (assign, equal, param, etc.) will work, all other code will just fail and you need two completely different paths;
  • using a Null Object scales better - you can put as much information and structure into it as you want; NonExistingUser is a very good example (sketched below) - it can contain the email of the user you tried to retrieve and suggest creating a new user based on that information, while with the null solution I would need to think about how to keep the email that people attempted to access close to the null-result handling;
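
A rough sketch of that last point (all type and member names are invented for illustration):

public abstract class User
{
    public abstract bool Exists { get; }
    public abstract string DisplayName { get; }
}

// The null object: it still behaves like a User, but carries the context of the
// failed lookup instead of forcing a null check on every caller.
public class NonExistingUser : User
{
    public NonExistingUser(string requestedEmail)
    {
        RequestedEmail = requestedEmail;
    }

    // The email we tried to look up, so the UI can offer "create an account for this address?".
    public string RequestedEmail { get; }

    public override bool Exists => false;
    public override string DisplayName => "(unknown user)";
}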

In an environment like Java or C# it won't give you the main benefit of explicitness - the safety of the solution being complete - because you can still receive null in the place of any object in the system, and Java has no means (except custom annotations for Beans, etc.) to guard you from that except explicit ifs. But in a system free of null implementations, approaching the problem in such a way (with objects representing exceptional cases) gives all the benefits mentioned. So try to read about something other than mainstream languages, and you'll find yourself changing your ways of coding.

In general, Null/Error objects/return types are considered a very solid error-handling strategy and quite mathematically sound, while the exception-handling strategy of Java or C is only one step above the ""die ASAP"" strategy - so it basically leaves all the burden of the instability and unpredictability of the system on the developer or maintainer.

There are also monads, which can be considered superior to this strategy (also superior in complexity of understanding at first), but those are more about the cases when you need to model external aspects of a type, while a Null Object models internal aspects. So NonExistingUser is probably better modeled as a maybe_existing monad.

And lastly, I don't think there is any connection between the Null Object pattern and silently handling error situations. It should be the other way around - it should force you to model each and every error case explicitly and handle it accordingly. Now of course you might decide to be sloppy (for whatever reason) and skip handling the error case explicitly, handling it generically instead - well, that was your explicit choice.

","63958","","","","","2012-09-10 14:04:41","","","","1","","","","CC BY-SA 3.0" "81518","2","","81494","2011-06-04 10:39:23","","3","","

As Dean said, you are being assessed on multiple attributes, and these are usually:

  • Technical Skills
  • Whether you would fit into the team
  • Thought process
  • etc.

The technical skills required for the role will differ depending on which team you are interviewing with, so if it doesn't work out with one team, you could (depending on the company) re-apply and find a better fit with another team. So don't lose hope!

The majority of technical skills are usually tested with coding problems. You mentioned that occasionally you missed a border case and that a few bugs crept in (as they inevitably do when asked to code on a whiteboard). A good approach to answering these coding questions is to do the following:

  • Understand what is being asked (ask to repeat certain parts if necessary)
  • Ask clarifying questions (iteratively/recursively, Do specific constraints exist?, which language?, etc)
  • Identify appropriate data structures, algorithms, design patterns that may be used (Programming interviews exposed and Programming Pearls are helpful for this)
  • Write the code, whilst explaining out loud to the interviewer what your thought process is. If the interviewer knows what you are thinking, they may be able to identify problems in your approach early, and guide you towards a better solution.
  • Before telling the interviewer that you are complete, think and explain to the interviewer how you would test the software you just wrote. Think about simple cases, border cases, concurrency, whether the approach makes sense for other cultures, security implications, stress testing, etc.

Finally, admitting that you don't know something is (IMHO) preferable to stumbling along trying to fake it. Granted, the interviewer is asking you to solve a problem, but if you don't know where to start, I'd recommend talking about the valid approaches and trying to narrow down a correct one that addresses the constraints given. If you have no idea where to start, it may be time to explain that. (This also ties into how you fit into the team; I'd say that it is better to ask for direction early.) So I don't think that saying you don't know is a bad thing (assuming that it isn't all that is said =])

There isn't specifically much that you can do about fit, as it often comes down to the personal opinion of the interviewer, but conversing with the interviewer about what you're thinking/doing is preferable to coding in silence for 15 minutes and then declaring ""I'm finished"".

Keep in mind that these things are usually a two way interview. They are not only interviewing you, you are also interviewing them. Feel free to ask questions about the job/team/company.

Finally, Microsoft recruiters post quite a fair amount of info on what they are looking for during a phone screen/interview, so I'd recommend having a read. Additionally, GlassDoor has a lot of info on interview processes for companies (but the user-submitted answers aren't always correct). A Google search for MS/Google/Amazon/Apple/etc. interview questions will also yield results.

Good Luck.

","25039","","25039","","2011-06-04 10:47:14","2011-06-04 10:47:14","","","","0","","","","CC BY-SA 3.0" "164552","1","164574","","2012-09-11 20:37:13","","4","277","

I work for a small development company (20 people total in the entire company, 3 in actual development), and we've adopted CD for our commits to trunk; it works great from a code management and up-time perspective. However, we're getting flak from our support staff and marketing department: they don't feel that they're getting enough lead time on new features and notifications of bug fixes that could change behavior. Part of why we love the CD system is that for us in development it's fast: we fix the bug, add the quick feature, close the Bugz entry and move on with our day to the next item.

All members of our company are now on HipChat at all times, and when a deployment occurs, a message is sent to a room that all company members are in, letting them know what was just deployed (it just shows the commit messages from tip back to the last recorded deployment). We in development are also attempting to make sure that when we're making a change that modifies the UI or a public facing behavior, we post a screenshot to the All Company room and explain what the behavior change is, seeking pushback or concerns. Often, the response is silence. Sometimes, it's a few minor questions, but nothing that need stop the deployment from happening.

What I'm wondering is: how do other users of the CD method handle notifying the non-development areas of the company (and eventually customers out in the world) of new features and changes?

Thanks,
Francis

","64055","","","","","2012-09-12 02:01:52","How to communicate within a company what is being Continually Deployed","","2","5","1","","","CC BY-SA 3.0" "81946","2","","81919","2011-06-06 15:46:35","","3","","

This is a very tough problem to solve as these kinds of decisions can be made by a multitude of different factors. I am going to attempt to answer it to the best of my ability.

Many times these bloated and expensive enterprise suites are purchased with minimal involvement by technically oriented personnel on the team. Sales demonstrations and demo apps are tools to sell the idea to the functional managers and stakeholders of the project or company.

This problem occurs when the functional managers are given too much power or voice, or technical personnel are not given enough. This comes full circle to my previous comment that this may be purposeful, but that is another topic of discussion.

Technical involvement and evaluation COULD minimize this problem.

In regards to your idea about a good evaluation site of commercial software it would be EXTREMELY HARD to set such a site up in an UNBIASED or pure way.

Technical evaluations that PRAISE one product over another are worth a LOT of money, and an influx of money from concerned parties would put pressure and strain on such a site, making it hard to stay unbiased. As if this weren't enough, there are a number of different online services that hire people to put up fake blogs and fake ""grassroots"" product evaluations, which makes identifying TRULY UNBIASED ratings hard to do. Pay-per-post companies hire people to flood artificial popular opinion onto a number of different sites as well.

And yet still, ANOTHER problem you would have to contend with are potential legal liabilities and challenges from companies that own products that received BAD reviews, claiming slanderous falsehoods and loss of business. Many companies (cough Computer Associates) have armies of lawyers that do just that.

These are a number of reasons why such a site would be hard to run.

","25476","","","","","2011-06-06 15:46:35","","","","3","","","","CC BY-SA 3.0" "378567","1","","","2018-09-17 14:05:47","","3","1044","

We have a REST API with an endpoint accepting JSON data from the client. One of the JSON fields is a URL that will be rendered to other users as a hyperlink to a website page associated with the resource. Somewhere in the pipeline we needed to ensure the URL is valid (starts with http(s)://, contains a domain from a whitelist, etc.).

So we designed the API such that it would accept only valid URLS, and return an error (400) when the URL is considered invalid. On the UI side, the user has to correct the URL until it is valid, the error message adapting to the error case (missing value, invalid domain, invalid format...).

Our product owner tested our implementation before going live, and had trouble with this simple approach. He typed in ""facebook.com/foobar"" and was expecting the URL to be valid. The error message was something along the lines of ""Please enter a valid URL like https://www.example.com/xxxx"". He was expecting (quite rightly) that the input field would accept anything a browser address bar would accept. The error message could have been clearer (""the URL should start with http(s)://""), but we agreed he was right and that in this case, user input should be fixed by our application before being saved.
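
To make the 'fix' concrete, here is roughly what we mean (C# used only as an illustration; the domain whitelist check is omitted and NormalizeUrl is a made-up helper):

// Hypothetical normalization: a missing scheme is the only thing fixed silently,
// everything else is still rejected (and surfaced as a 400 by the API layer).
static string NormalizeUrl(string input)
{
    var candidate = input.Contains("://") ? input : "https://" + input;

    if (Uri.TryCreate(candidate, UriKind.Absolute, out var uri)
        && (uri.Scheme == Uri.UriSchemeHttp || uri.Scheme == Uri.UriSchemeHttps))
    {
        return uri.ToString();
    }

    throw new ArgumentException("Invalid URL");
}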

Here we had 2 ideas:

  • Either let the API correct the URL (prepend a default protocol) upon saving ;
  • Or prepend the protocol on the client side, and don't touch the API validation.

I have a strong preference for the client-side method, because I believe a REST API should never alter user input silently (you never know what kind of client will consume your API, and silently modifying user input could have unexpected side-effects). The problem is I couldn't find any real-life example to back up my point of view.

On the contrary, one of my teammates (the one responsible for the fix) couldn't find any good reason to prefer one method over the other, and went for the API fix (mainly because it's much faster to implement, and you don't have to implement this behavior for each client using your API).

What do you think?

","311273","","311273","","2018-09-21 09:53:26","2018-10-02 09:28:57","In a REST API, should you correct user input on server side?","","5","1","1","","","CC BY-SA 4.0" "378974","2","","378970","2018-09-25 14:01:02","","1","","

That would not be a business rule.

Business rules should be collected, and ideally be referred to in the source code.

They should describe the logic with respect to all business aspects: after printing an invoice, the invoice number cannot be changed anymore, and has to be put in the audit log.

Now one can have a business rule: send a notification on this ... event. The implementing code may refer to this business rule, and state that, if needed, it splits an overly long message into two parts that are sent separately. That does not look like a requirement known beforehand, but rather like implementation documentation written afterwards. A bit like ""one can click on the i to receive contextual online help.""

If one only has business rules, fine, add the implementation details. Otherwise keep them separate; keep the business rules' ownership neutral and without too much fluff: the business people should not feel over-restricted by imposed details/""corrections."" It is like pagination of lists and such. ""The results must come in limited pages, though an entire scrollable list must be selectable too."" That is - as you said - something for application design. And - in contrast to decisive business rules - it says something about the internals.

Having two messages must be explained. But let me give a comparison:

An architect says: for a building for N people and M floors there are K lifts. The building owner will want to have the technical documentation: how the elevators intelligently wait on the first and top floors, what strategy is used to respond to a button press, and so on. Important technical implementation details, intelligent design decisions. Sending two notification messages falls into the same category.

In the business rule, ""notification message"" needs to be changed to ""notification as one or two messages (footnote: if the notification becomes too long)"", but the technical justification and details should go elsewhere.

Now the implementation may be changed without the business rules being affected much (those rules will deal with a ""notification"", not a single partial ""message"").

","98400","","","","","2018-09-25 14:01:02","","","","4","","","","CC BY-SA 4.0" "165489","2","","18116","2012-09-20 08:49:56","","2","","

My recent experience with Elance in particular seems to suggest that there are a lot of developers based in Asia who are prepared to undercut western developers to get the work, but they all seem to target the popular technologies, such as PHP and C# development. As soon as you move into less popular technologies you start to find fewer people bidding, and the work that is there tends to pay much more.

A friend has used Elance a number of times to find work. One job was testing applications to ensure they were suitable for use by children, and she was being paid more for this than she earns in her day job.

I have posted a few jobs on Elance expecting to be flooded with cheap offers of $5 an hour, but received no such thing. These were jobs involving web design and an iPhone application involving voice communications. In the first case I received offers far higher than I was prepared to pay, and in the second I received no bids at all.

In both cases I would have been happy to accept a reasonable offer from a quality developer rather than the cheapest offer, but in both cases I simply was not receiving these.

I think you can make money freelancing on Elance, you just need to specialise in a less popular field so that you are not competing directly with the Asian developers.

","27208","","","","","2012-09-20 08:49:56","","","","1","","","","CC BY-SA 3.0" "83142","2","","83091","2011-06-10 20:55:48","","8","","

I'm going to do my best to cut through the confusion in the question.

First of all, ""Data Object"" is not a meaningful term. If the only defining characteristic of this object is that it doesn't have methods, then it shouldn't exist at all. A useful behaviour-less object should fit into at least one of the following subcategories:

  • Value Objects or ""records"" have no identity at all. They should be value types, with copy-on-reference semantics, assuming the environment supports it. Since these are fixed structures, a VO should only ever be a primitive type or a fixed sequence of primitives. Therefore, a VO shouldn't have any dependencies or associations; any non-default constructor would exist solely for the purpose of initializing the value, i.e. because it can't be expressed as a literal.

  • Data Transfer Objects are often mistakenly confused with value objects. DTOs do have identities, or at least they can. The sole purpose of a DTO is to facilitate the flow of information from one domain to another. They never have ""dependencies"". They may have associations (i.e. to an array or collection) but most people prefer to make them flat. Basically, they're analogous to rows in the output of a database query; they're transient objects that usually need to be persisted or serialized, and therefore can't reference any abstract types, as this would make them unusable.

  • Finally, Data Access Objects provide a wrapper or façade to a database of some kind. These obviously do have dependencies - they depend on the database connection and/or persistence components. However, their dependencies are almost always externally managed and totally invisible to callers. In the Active Record pattern it's the framework that manages everything through configuration; in older (ancient by today's standards) DAO models you could only ever construct these via the container. If I saw one of these with constructor injection, I would be very, very worried.

You may also be thinking of an entity object or ""business object"", and in this case you do want to support dependency injection, but not in the way that you think or for the reasons you think. It's not for the benefit of user code, it's for the benefit of an entity manager or ORM, which will silently inject a proxy which it intercepts to do fancy things like query comprehension or lazy loading.

In these, you usually don't provide a constructor for injection; instead, you just need to make the property virtual and use an abstract type (e.g. IList<T> instead of List<T>). The rest happens behind the scenes, and nobody's the wiser.
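
A minimal sketch of what that tends to look like (hypothetical entity; the exact rules depend on the ORM):

public class Invoice
{
    public int Id { get; set; }

    // virtual + abstract collection type: the ORM can subclass the entity and swap in a
    // lazy-loading proxy collection, and no caller is any the wiser.
    public virtual IList<InvoiceLine> Lines { get; set; } = new List<InvoiceLine>();
}

public class InvoiceLine
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}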

So all in all I would say that a visible DI pattern being applied to a ""data object"" is unnecessary and probably even a red flag; but in large part, that is because the very existence of the object is a red flag, except in the case when it is specifically being used to represent data from a database. In almost every other case it's a code smell, typically the beginnings of an Anemic Domain Model or at the very least a Poltergeist.

To reiterate:

  1. Don't create ""data objects"".
  2. If you must create a ""data object"", then make sure it has a clearly defined purpose. That purpose will tell you whether or not DI is appropriate. It's impossible to make any meaningful design decisions about an object that shouldn't exist in the first place.

Fin.

","3249","","","","","2011-06-10 20:55:48","","","","0","","","","CC BY-SA 3.0" "165944","2","","165876","2012-09-23 23:53:40","","2","","

It would be nice to think there is a way to resolve this, but you'll find that when you are a very small supplier to a large company, they pretty much dictate terms. I've been in this situation for a long time with a huge multinational, and we have to learn to roll with their way of thinking.

The most important mitigation is to have good relationships with the people involved, especially the decision makers. Then you can gently influence, and if things get ugly, they will speak up for you.

Of course you can make sure you have good contract/legal terms that protect you, but they won't help much if the customer really decides they don't like you and is going to make life difficult. Which is why the personal relationships are so important.

","65963","","","","","2012-09-23 23:53:40","","","","0","","","","CC BY-SA 3.0" "83233","1","83241","","2011-06-11 13:27:49","","19","854","

In the answers of What's the canonical retort to "it's open source, submit a patch"?, many people voiced the opinion that simply asking people to submit a patch is arrogant and rude.

But it seems to me that as a developer on any open source project, you will see many more feature requests on the mailing list than you could possibly implement. So when a user says, ""I would like to see feature X"", the truth of the matter is usually that the chances of it getting implemented are pretty slim unless they submit a patch themselves. Also, sometimes a little encouragement might be all that's needed to turn a user into a contributor.

On the other hand, you don't want to scare (potential) contributors away by coming off as rude.

So how would you say ""please submit patches instead of asking for features"" in a friendly manner?

Update: Thanks for all the suggestions! I see most of them require pretty lengthy explanations. But since I'd rather avoid either (a) explaining the same thing every other day (it just takes too much time), or (b) using snippets that I paste into email (it gets impersonal real quick), I wonder: Has anyone written this up in a document that I can link to?

(Project-specific things like how to write tests, compile the code, and submit the patch still need to be documented of course, but I think those technical issues should go into CONTRIBUTING.txt anyway.)

","12173","","-1","","2017-04-12 07:31:27","2011-07-08 13:52:03","How do you phrase “it's open source, submit a patch” so that it's friendly?","","8","4","2","","","CC BY-SA 3.0" "166043","2","","166037","2012-09-24 23:53:56","","3","","

Being the sole voice of change is never an easy thing. The first challenge is usually to get other people on board with you (hopefully there are some in your company who are more open-minded than the rest). If you can grab the attention of a couple of other senior developers/managers to join your crusade, those combined voices will be even harder for the rest to ignore.

I find a good way to explain things to people is by providing concrete examples of past mistakes made in projects they have been actively involved in (e.g. ""Have you ever tried to debug this thing? It's been like this for years and no one knows how to fix it..""). Better still, apply those 'new' design ideas and principles to a small/trial project and be able to show off how successful it's been at the end.

Depending on the attitudes of others in your company, change may come quickly, or it may be like trying to swim through treacle. Some managers are completely immovable unless they have solid statistics/proof in front of them, whereas others are quite keen to keep up with the latest thinking.

You might need to try different tactics depending on who you're dealing with. Suggesting that money/time/effort might be saved and quality improved is a good way to grab the attention of non-technical managers, and the promise of easy automated testing appeals to quite a lot of developers who can't stand spending days running through the same test spreadsheets repeatedly.

","51489","","","","","2012-09-24 23:53:56","","","","0","","","","CC BY-SA 3.0" "166119","2","","166104","2012-09-25 14:08:40","","10","","

I'd like to add my voice to those who recommend one backlog per product. Creating another backlog is a rational response, but is really just avoiding the core issue: Why won't the Product Owner prioritise technical items over feature items? You should focus on solving this rather than working around it. You could use the 5 Whys technique, for example, to try to get to the bottom of things.

There could be many reasons why the PO doesn't prioritise technical issues. For example, maybe the tech team isn't explaining the long-term cost (in $$$) of not addressing the technical debt. Maybe it's something else completely. There's a good chance it's down to a communication issue, and the long-term solution is to work on it and resolve it -- remove the impediment.

Additionally, I have another question for you to think about: why has the technical debt arisen in the first place? Ideally, work such as refactoring should happen within the functional stories and be completed within the sprint. It shouldn't become extra stories in its own right, otherwise you don't have potentially shippable code.

","66124","","","","","2012-09-25 14:08:40","","","","0","","","","CC BY-SA 3.0" "273866","1","","","2015-02-20 14:56:42","","4","1089","

I'm trying to design a relatively simple ERP system. However, there are some requirements that complicate things a little bit:

  1. It must be possible to add all sorts of contacts to the people table, including clients and co-workers.
  2. It must be possible to assign a user to a contact, so users can access their schedules and stuff.
  3. It must be possible for users to be assigned to multiple customers, when for instance a user works for several organisations.
  4. It must be possible for different organisations to have different contact details for one user.
  5. When — in the future — a project management functionality is added, it must be possible to share projects between organisations.

I came up with this simple data model:

As you can see, there is some data duplication between tables.

Should I just get rid of the customer's organisation name, and retrieve that from the customer's contact field instead? And yes, the customer's contact is the person that receives invoices and such from us. Is this a good design decision or should I not use the people table for this?

The user's name is a duplication of the contact's name, but I don't think this is avoidable? I don't want to tie the user's details to the contact's details, see point 4.

Again, this is just a very simple 'mockup' to visualise things, but what kind of improvements can I make to this model? Is there a more elegant way?

","168722","","168722","","2015-02-20 15:12:09","2015-02-22 12:07:28","Should I avoid data duplication?","","1","7","","","","CC BY-SA 3.0" "83799","2","","83797","2011-06-14 13:53:30","","49","","

I tend to delete comments in code. And by delete, I mean, with prejudice. Unless a comment explains why a particular function does something, it goes away. Bye bye. Do not pass go.

So it shouldn't surprise you that I would also delete those changelogs, for the very same reason.

The problem with commented-out code and comments that read like books is that you don't really know how relevant they are, and they give you a false sense of understanding as to what the code actually does.

It sounds like your team doesn't have good tooling around your version control system. Since you said you're using Subversion, I'd like to point out that there's a lot of tooling that will help you manage your Subversion repository. From the ability to navigate your source through the web, to linking your changesets to specific bugs, you can do a lot that mitigates the need for these 'changelogs'.

I've had plenty of people comment and say that perhaps I'm in error for deleting comments. The vast majority of code I've seen that's been commented has been bad code, and the comments have only obfuscated the problem. In fact, if I ever comment code, you can be assured that I'm asking for forgiveness from the maintenance programmer because I'm relatively certain they'll want to kill me.

But lest you think I say that comments should be deleted in jest, this Daily WTF submission (from a codebase I worked on) illustrates my point perfectly:

/// The GaidenCommand is a specialized Command for use by the
/// CommandManager.
///
/// Note, the word ""gaiden"" is Japanese and means ""side story"",
/// see ""http://en.wikipedia.org/wiki/Gaiden"".
/// Why did I call this ""GaidenCommand""? Because it's very similar to
/// a regular Command, but it serves the CommandManager in a different
/// way, and it is not the same as a regular ""Command"". Also
/// ""CommandManagerCommand"" is far too long to write. I also toyed with
/// calling this the ""AlephCommand"", Aleph being a silent Hebrew
/// letter, but Gaiden sounded better.

Oh... The stories I could tell you about that codebase, and I would, except that it's still in use by one of the largest government organizations around.

","1577","","1577","","2012-01-17 16:39:36","2012-01-17 16:39:36","","","","14","","","","CC BY-SA 3.0" "166465","2","","166461","2012-09-27 17:12:53","","23","","

No, it is not ""quicker"": compilers will translate both expressions into the same code.

Some time ago, the first pattern was suggested to people coming to C from other languages where comparing objects required a single =. The idea was to protect them from making this mistake:

if (myVariable = 100)

This is legal, but it assigns 100 to myVariable instead of comparing myVariable to 100. If you make it a habit to put 100 ahead of myVariable, the compiler will trigger an error, because

if (100 = myVariable)

is illegal.

Modern compilers issue warnings when they see an assignment in place of an equality check (==). You can silence the warning in cases when you do want to use an assignment inside an if by adding a second set of parentheses around your assignment expression.

Moreover, the construct is not useful in C# at all, because if (myVariable = 100) is not legal.

","44705","","","","","2012-09-27 17:12:53","","","","1","","","","CC BY-SA 3.0" "84144","1","84147","","2011-06-15 10:41:09","","3","990","

I am currently working on a money tracking/invoice creation app that I intend to release for free. The app can be broken down to three parts:

  • The Framework, a generic, all-purpose collection of classes (php/mySQL)
  • The app itself (php/javascript)
  • The design (images)

I am trying to find licenses that fit three different purposes:

  • I want to release the framework under a license that specifies that
    • The framework is open-source, free, and cannot be sold
    • However, the framework can be used in commercial products, as long as no author names are removed from the code and the framework's source is available (a link to my SourceForge page on the about page will do... even a small one, hidden in a subpage or in the FAQ, as long as people really looking for it can find it).
    • Code that uses my framework doesn't have to be open-sourced. I don't want to stop people from releasing non-open-source, commercial products. Too many times I have been blocked by this when working for a client, and I don't want to inflict the same problems on the community. Furthermore, I will surely use my framework myself for closed-source projects for clients.
  • I want to release the app part under an open-source, free license that disallows any attempt to sell it (but allows forks, as long as they stay open-source and free)
  • I want to release the design (icons, backgrounds) under a free license for non-commercial projects only.

Additionally, If it is possible (if such a license exists), I would like to remove all constraints, even for commercial products, as long as the project is led by a one-man (or a one-woman) team. In other words, I'd like freelancers to be able to fully enjoy complete freedom, but have some restrictions for companies.

It might be worth mentioning that although the framework is totally custom code, the app will contain some third-party code, namely jQuery, and maybe some other JavaScript components.

I am aware this is a very specific question that doesn't necessarily help the coding community, just me, but I don't know where else to turn.

","25092","","25092","","2011-06-17 11:25:38","2011-06-17 11:54:41","Mix three different licenses for an open-source software","","2","8","","","","CC BY-SA 3.0" "84446","1","","","2011-06-16 11:21:19","","3","273","

This includes everything non-programming that contributes to the success of the app.

Some points to consider:

  • How do you make people notice your app when there are hundreds released every week? How to get an article published in a mobile app blog about your app? Should you buy ads? What about a press release?
  • How to shoot a promotional video? What material is good? Should you go for animation or real life footage?
  • How to make a project website for the app? Should it be simple in design? What about a feedback forum? Should you use a standard forum like uservoice.com?
  • To which marketplace should you release the app? Should you stick to one or as many as you can?
  • Should you wait to release until you have all of these handled, or should you do it immediately? Is there a good time to release your app?
","22695","","","","","2017-12-26 12:07:51","How to release a mobile app successfully?","","1","0","2","","","CC BY-SA 3.0" "85383","2","","85372","2011-06-20 02:44:40","","1","","

I honestly don't see what use questions like this can possibly give an interviewer. I suppose if you were interviewing an intern and wanted to be sure they'd had, and hadn't flunked, their data structures class, sure... but I just don't see how these questions actually answer anything useful. Being able to regurgitate a linked list implementation on request does not a programmer make, and a person who cannot is not necessarily clueless either.

On the other hand, asking about how to USE these structures can be beneficial. That's what a programmer is actually going to do anyway. If you know the basic components of your data structures then you can better gauge which is useful and where. Then you can take your time making your own implementation if it's actually necessary, with the aid of unit tests and a compiler (rather than a whiteboard).

Some interviewers claim that it helps them gauge how a person thinks. They expect you to talk about your process of solving the problem out loud. I see two problems with this:

1) It's not an interesting problem. There's nothing to think about.

2) Many people are derailed when trying to ""think"" out loud. The number of brain centers that have to be involved in that process is much greater than for just solving the problem. Some people are helped by this; other (and they may be very good too) people end up getting too distracted. This doesn't mean they're bad communicators either, just that they're the type that sits there in silence solving the problem and THEN proposes the solution they come up with.

Anyway, rant over.. I would say that if the interviewer asked you to solve a problem that involves a data structure that you should feel free to use one. I'd go so far as to propose that you can simply make up your own data structure API and just say (this does the basic XXXX operation). If they specifically ask you to write some part of a data structure though then of course you've run into one of THOSE people and will simply have to do it.

","9293","","47","","2011-06-20 11:11:54","2011-06-20 11:11:54","","","","7","","","","CC BY-SA 3.0" "167922","1","","","2012-10-08 05:02:24","","17","1877","

In particular, I'm curious about the following aspects:

  1. How do you know that your test cases are wrong (or out-of-date) and need to be repaired (or discarded)? I mean, even if a test case became invalid, it might still pass and remain silent, which could let you falsely believe that your software works okay. So how do you notice such problems in your test suite?

  2. How do you know that your test suite is no longer sufficient and that new test cases should be added? I guess this has something to do with the requirement changes, but is there any systematic approach to check the adequacy of test suite?

","21021","","20065","","2012-10-08 08:29:04","2012-10-08 08:29:04","How do people maintain their test suite?","","2","2","3","","","CC BY-SA 3.0" "85895","2","","85864","2011-06-21 17:11:11","","1","","

Ask questions and listen to the answers. Think about the answers to previous questions before you ask the next one so that you can try to anticipate an answer.

Strive to do the very best work you possibly can. Get used to asking yourself what someone else on the team will think of your code if they have to make a change to it next month.

If you see a problem that needs to be addressed, do your best to have a reasonable solution ready to offer before voicing concern over the problem. Take ownership of implementing a solution when you point out a problem.

","28559","","","","","2011-06-21 17:11:11","","","","0","","","","CC BY-SA 3.0" "168116","2","","168097","2012-10-09 10:19:08","","1","","

Your question hits a lot of people very squarely where they live, in a place that can be pretty painful. Bug is not a very precise technical term, but it certainly has a lot of emotional baggage.

Where you work, do people consider features to be planned improvements, and bug fixes to be unplanned improvements? When a developer creates code with bugs under the crunch of an all-too-short schedule, is he or she called back to be accountable for work that might not have been as thorough as needed? As the expert on the code, do people give kudos for a rapid and responsive solution? Do habitual underestimators face consequences after taking credit for features and for being fast, even though finding and fixing their bugs was very frustrating for testers and customers, and very time-consuming for developers working on maintenance?

Of course they don't. The more bugs we have, the less pride of workmanship we have, the less accomplished our team. If you are working on features, you must be trusted and skilled. If on bugs, not so much. Features get a thank you and a party when they are done. Bugs get the silent treatment or a retrospective that discusses how late the release was because we had so many bugs. Maybe where you work the retrospective talks about how the features unfolded into additional unanticipated functional and systems requirements that were not part of the estimate but were resolved with overtime and heroic efforts from team members?

Making a feature is a task. Fixing a bug is a task. What we call bugs are often defects that relate to features we said were done, or missing parts that are due to incomplete understanding of functional or system requirements (or in Agile, user stories or use cases that are incomplete or missing alternative flows).

If our customer visible code is constantly constructed in advance of the underlying support, this is a design or project management problem. We have TDD and unit testing, so ideally, we can be pretty thorough in our testing and pretty selective about exactly when we expose features through the UI to testers, customers, or product management.

Projects often use rapid UI prototyping to show a user interface that has no code behind it. It does not do development any favors if product management believes that 80% of the work is done when that demo is shown, but the project continues to run for a long time. Weigh carefully how far you let your prototypes run ahead of your field ready product.

Agile aims to break the contract mentality and makes collaboration using prototypes more palatable. The relationship and communication between developers and stakeholders needs a high degree of give and take. It helps to methodically manage requirements using a burn down list or other way to limit scope and shorten the time line that is subject to estimation.

Part of our problem is how we track progress. If we have features described by percentage complete, we essentially take credit for something before it is accomplished. Estimates of percentage complete are extremely unreliable. If we need to show progress, we should break things down to the level where we can say that something important to our project, something that stands alone as a cohesive piece of functionality, is completely done. Instead of partially complete milestones, only use fully complete inch-pebbles.

For the work that you describe, if it brings more pride and motivation into the team, I definitely vote for calling it a task and not a bug.

","61659","","","","","2012-10-09 10:19:08","","","","0","","","","CC BY-SA 3.0" "87556","2","","87546","2011-06-28 01:37:34","","55","","
  1. Get everything in writing upfront.
  2. Never do anything for free. Sets a bad precedent for you and your peers. It destroys the local market.
  3. If a customer misses a payment, even one, stop work until they get current. Be professional and un-emotional but be firm. They are already into you for 30 days of work or more, don't dig a deeper hole. You aren't a bank, you are lending them money interest free at this point.
  4. Bill customers that miss payments interest for the time the payment was late. Send as many invoices with LATE on them as you think you need to; don't be shy about the money.
  5. Get everything in writing upfront.
  6. If a potential customer won't agree to your terms, what makes you think they will be reliable and easy to work with on their terms. Be prepared to professionally walk away.
  7. Be willing to turn down work that won't be profitable. Or worse will cost you money or time being profitable.
  8. Never work on a break-even project thinking you will make it up on the next one the customer gives you. You won't; you have set a precedent for them to expect to be able to lowball you.
  9. Get everything in writing upfront.
  10. Cheap customers are always cheap customers and will only get cheaper, more demanding and suck up all your time.
  11. Learn what a change request is, put this in your contract that they cost money and they push the schedule. Bill at least 25% more for change requests to make sure the client really needs them, just 1 or 2 change requests can sap all your profit off a single project.
  12. Learn to do Agile Methodologies, SCRUM in particular is a good way to manage customers, especially the ones that become difficult.
  13. Get everything in writing upfront.
  14. Never deliver anything sub-par, even if it is going to be late; crap on time is still crap. Crap gets you a worse reputation than late-but-quality work does.
  15. Your reputation is everything, it isn't what you know or do, it is what people say about you.
  16. Plan on networking at every user group meeting and the like to get the good paying jobs.
  17. Get everything in writing upfront.
  18. Get paid for every hour you work, don't be shy about the money, watch this video.
  19. Professional relationships are not about you bending over backwards to please irrational customers with unrealistic expectations; they are about respect. Your customer should see you as an expert and a professional, not a warm body filling a chair and costing them money. Don't take those jobs; there is no profit in them.
  20. Breaking your own rules, even once sets a precedent to the customer that the other rules can be bent or broken, this leads to misery and loss of profits.
  21. Get everything in writing upfront.
  22. Fixed price jobs aren't the fixed price you will make, they are usually the amount that you will lose X 2.
  23. Spend more time learning about marketing and sales techniques and effective communication patterns than technology. As a consultant you should already be an expert in what you do; the other things you now need to be an expert in as well.
  24. Networking is important so you can delegate some things you might not be an expert in to a sub-contractor friend or associate, or at least lean on them for advice and education. You won't know everything but will be expected to.
  25. Charge enough for your time; your customers are not doing you a favor by having you work for them, you are doing them a favor by selling them your time and expertise. Lowballing never helps you, your peers, or the market.
  26. No matter how good the relationship with the customer is get everything in writing up front and never break this rule or do anything by word of mouth.
  27. Never work for friends, they won't be your friends anymore, especially not for free
  28. Never work for family either, see above.
  29. Never do anything free.
","","user7519","","user7519","2011-06-28 14:54:42","2011-06-28 14:54:42","","","","6","","","","CC BY-SA 3.0" "87788","1","87791","","2011-06-28 18:59:10","","8","573","

Not sure that this is the right stack exchange site to ask this of, but here goes...

Scope


I work for a small company that employs a few hundred people. The development team for the company is small and works in Visual FoxPro. A specific department in the company hired me as a 'lone gunman' to fix and enhance a pre-existing invoicing system. I've successfully taken an Access application that suffered from a lot of risks and limitations and converted it into a C# application driven off of a SQL Server backend.

I have recently obtained my undergraduate degree and am no expert by any means. To help make up for that, I've felt that earning Microsoft certifications will force me to understand more about .NET and how it functions.

So, after I gave my notice 9 months in advance, a replacement finally showed up 3 months ago. Their role is to learn what I have been designing, in an attempt to support the applications written in C#.


The Replacement

Fresh out of college with no real-world work experience, the replacement's first instinct for anything involving data was and still is listboxes... any time data is mentioned, the listbox is the control of choice. This has gotten to the point, no matter how many times I discuss other controls, where I've seen 5 listboxes on a single form. Their classroom experience was almost all C++ console development.

So, an example of where I have concern is in a WinForms application: users need to key Reasons into a table to select from later. Given that a strongly typed dataset exists, I can just drag the data source from the toolbox and it will create all of this for me. I realize this is a simple example, but using data binding is the key.
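
Roughly, what the designer wires up boils down to something like this (a simplified sketch; the dataset, adapter and control names are invented for illustration):

// The typed adapter fills the typed table, and a BindingSource keeps the grid,
// the dataset and any edits in sync -- no manual loops, no lookup variables.
private void ReasonsForm_Load(object sender, EventArgs e)
{
    reasonsTableAdapter.Fill(invoiceDataSet.Reasons);        // designer-generated typed adapter and table

    reasonsBindingSource.DataSource = invoiceDataSet;
    reasonsBindingSource.DataMember = ""Reasons"";

    reasonsDataGridView.DataSource = reasonsBindingSource;   // in-place add/edit/delete for free
}

private void saveButton_Click(object sender, EventArgs e)
{
    reasonsBindingSource.EndEdit();
    reasonsTableAdapter.Update(invoiceDataSet.Reasons);      // push the changes back to SQL Server
}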

For the past few months now we have been talking about the strongly typed dataset, how to use it and where it interacts with other controls. Data sets, how they work in relation to binding sources, adapters and data grid views. After handing this project off I expected questions about how to implement these since for me this is the way to do it. What happened next simply floors me:

An instance of an adapter from the strongly typed dataset was created in the activate event of the form, a table was created and filled with data. Then, a loop was made to manually add rows to a listbox from this table. Finally, a variable was kept to do lookups to figure out what ID the record was for updates if required.

How do they modify records, you ask? That was my first question too. You won't believe how simple it is: all you do is double-click and type the new value into a pop-up prompt. If I were a data entry operator, all the modal popups would drive me absolutely insane. The final solution exceeds 100 lines of code that must be maintained.

So my concern is that none of this is sinking in... the department is only allowed 20 hours a week of their time. Up until last week, we've only been given 4-5 hours a week if I'm lucky. The past week or so, I've been lucky to get 10.


Question

WHAT DO I DO?!

I have 4 weeks left until I leave and they fully 'support' this application. I love this job and the opportunity it has given me but it's time for me to spread my wings and find something new. I am in no way, shape or form convinced that they are ready to take over.

I do feel that the replacement has the technical ability to 'figure it out', but instead of learning they just write code to do all of this stuff manually. If the replacement wants to code differently in the end, as long as it works I'm fine with that, as horrifying as it looks. However, to support what I have designed they MUST understand how it works and how I have used controls and the framework to make 'magic' happen.

This project has about 40 forms, a database with some 30-odd tables, triggers and stored procedures. It relates labor to invoices to contracts to projections... it's not as simple as it was three years ago when I began this project, and the department is now in a position where they cannot survive without it.

How in the world can I accomplish any of the following?:

  • Enforce standards or understanding in consistent design when the department manager keeps telling them they can do it however they want to
  • Find a way to engage the replacement in active learning of the framework and system design that support must be given for
  • Gracefully inform sr. management that 5-9 hours a week is simply not enough time to learn about the department, pre-existing processes, applications that need to be supported AND determine where potential enhancements to the system go...

Yes, I know this is a wall of text; thanks for reading through it, but I simply don't know what I should be doing. For me, this job is a monster of a reference and things would look extremely bad if I left and things fell apart. How do I handle this?

","28040","Mohgeroth","28040","","2011-06-29 02:05:05","2011-06-30 22:29:58","Training a 'replacement', how to enforce standards?","<.net>","4","6","1","","","CC BY-SA 3.0" "170760","1","170822","","2012-10-21 07:23:02","","24","15032","

I am evaluating Google Protocol Buffers for a Java based service (but am expecting language agnostic patterns). I have two questions:

The first is a broad general question:

What patterns are we seeing people use? Said patterns being related to class organization (e.g., messages per .proto file, packaging, and distribution) and message definition (e.g., repeated fields vs. repeated encapsulated fields*) etc.

There is very little information of this sort on the Google Protobuf Help pages and public blogs while there is a ton of information for established protocols such as XML.

I also have specific questions over the following two different patterns:

  1. Represent messages in .proto files, package them as a separate jar, and ship it to target consumers of the service --which is basically the default approach I guess.

  2. Do the same but also include hand-crafted wrappers (not sub-classes!) around each message that implement a contract supporting at least these two methods (T is the wrapper class, V is the message class; using generics but simplified syntax for brevity):

    public V toProtobufMessage() {
        V.Builder builder = V.newBuilder();
        for (Item item : getItemList()) {
            builder.addItem(item);
        }
        return builder.setAmountPayable(getAmountPayable()).
                       setShippingAddress(getShippingAddress()).
                       build();
    }
    
    public static T fromProtobufMessage(V message_) { 
        return new T(message_.getShippingAddress(), 
                     message_.getItemList(),
                     message_.getAmountPayable());
    }
    

One advantage I see with (2) is that I can hide away the complexities introduced by V.newBuilder().addField().build() and add some meaningful methods such as isOpenForTrade() or isAddressInFreeDeliveryZone() etc. in my wrappers. The second advantage I see with (2) is that my clients deal with immutable objects (something I can enforce in the wrapper class).

One disadvantage I see with (2) is that I duplicate code and have to sync up my wrapper classes with .proto files.

Does anyone have better techniques or further critiques on any of the two approaches?


*By encapsulating a repeated field I mean messages such as this one:

message ItemList {
    repeated Item item = 1;
}

message CustomerInvoice {
    required ShippingAddress address = 1;
    required ItemList itemList = 2;
    required double amountPayable = 3;
}

instead of messages such as this one:

message CustomerInvoice {
    required ShippingAddress address = 1;
    repeated Item item = 2;
    required double amountPayable = 3;
}

I like the latter but am happy to hear arguments against it.

","26149","","4477","","2013-06-11 19:42:43","2018-07-22 14:52:18","Protobuf design patterns","","2","1","5","","","CC BY-SA 3.0" "277418","1","","","2015-03-26 04:38:39","","3","276","

First, sorry for my English guys.

This is currently my first programming job. I am labeled as the most incompetent programmer in my company; that's because they measure the performance and productivity of a programmer by how fast you can get things done. I'm not really sure if I'm slow or not, because I always test (manual testing) my code before submitting it, and I'm pretty sure that most programmers here don't test their code the way I test mine. I don't do automated tests because I admit that the concept is still complex to me. Since our software is not yet used by the users, we don't know which programmer has the most or fewest bugs. Also, the system is for internal use only, so we don't have strict deadlines. Time to ship is not that important.

We don't have best practices, automated testing, code reviews, or coding standards here in the company, so basically you are on your own; just make the code work and you're fine. Almost all of the programmers here are fresh from college. Even me, I'm a fresh graduate.

I think the reason why they are fast is because they do all of the business logic in SQL. So basically they have all the UI code and SQL code in one .aspx file, just like the code below:

public partial class InvoiceView : Page
{
    protected void button_click(object sender, EventArgs e)
    {
        string sql = ""Select * from some blah blah blah"";
        DataTable tab = ...; // some ADO.NET code here
        Gridview.DataSource = tab;
        Gridview.DataBind();
    }
}

Even before I got my first job (although this is my first job), I didn't code like this. I usually use a custom object, just like the code below.

public class Invoice
{
    public int InvoiceNo { get; set; }
    public DateTime PaidDate { get; set; }
    public List<Item> Items { get; set; }

    public decimal Amount
    {
        get
        {
            decimal amount = 0;
            foreach (var i in Items)
            {
                amount = amount + i.TotalPrice;
            }
            return amount;
        }
    }
}

After that, I create a DataMapper class, and I'm pretty sure this is the reason why I'm slow, because I have to manually map the table rows to objects and test the data mapper. So basically there is no ORM or micro-ORM. Our database doesn't have referential integrity and the tables always change, so I thought ORMs are not ideal for this project.
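
To make it concrete, my hand-written mappers look roughly like this (a simplified sketch; the column names are made up):

using System;
using System.Collections.Generic;
using System.Data;

public class InvoiceMapper
{
    // Maps one row of an ad-hoc query result to the domain object by hand.
    // Writing and testing this for every table is where most of my time goes.
    public Invoice Map(DataRow row)
    {
        return new Invoice
        {
            InvoiceNo = Convert.ToInt32(row[""InvoiceNo""]),
            PaidDate = Convert.ToDateTime(row[""PaidDate""]),
            Items = new List<Item>()   // filled in by a separate item mapping pass
        };
    }
}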

The person who labeled me as the slowest is actually a junior programmer just like the rest. He has 2 years of experience ahead of us, which is why he is our immediate superior. Sometimes I think the reason he said that is that he is still a junior and has no experience when it comes to managing a team of programmers.

I'm confident that I can do any job they throw at me.

Here is my question.

  1. Should I use a DataTable and shove it into a grid view just like the rest of my team does?

  2. When to use DataTable instead of custom objects or domain classes?

  3. Currently I only know two Data Access pattern, ActiveRecord and DataMapper. What do you call the pattern that my team uses?

  4. How can I code faster?

Thanks guys, sorry for my English.

","172524","","","","","2015-03-26 12:34:45","Development Time: sql in UI code vs domain model with datamapper","","3","2","1","","","CC BY-SA 3.0" "382534","2","","343669","2018-12-05 19:54:34","","1","","
  1. I always see that microservices are supposed to communicate with one another. Currently 2 separate microservices might call the same backend service. Is this correct design, or should I make one microservice A that calls a backend service, and 2 other microservices, B and C, that call A?

Answer 1: Deciding to split or merge microservices, or alter their communication structure is really dependent upon what you want to accomplish in regard to performance, security, network and code overhead, and your operational limitations.

You can answer your question ""should I make one microservice A that calls a backend service, and 2 other microservices, B and C, that call A?"" by answering a few questions such as:

  • How is performance impacted for all services when you make either decision?
  • How is security impacted for all services when you make either decision?
  • Which decision is easier for developers to reason about, or helps to simplify the code structure?
  • Are there any restrictions that would limit either decision (cost overrun with new servers/containers, external api rate-limits from third-party services, etc.)?

Once you answer the questions that matter you can find reasons why you would or wouldn't split a specific service.

Side Note: You are very clear that your organization designates the difference between a microservice and a backend service. Sometimes organizations use the wrong domain-specific language and group things in a way that is incompatible with the mental flexibility you need as a developer to do your job. To be clear: everything is a microservice. You shouldn't be afraid to open a REST interface on a ""backend"" service if the performance, security, overhead, and limitations matrix are in favor of this decision... because ""it's just another service"".

IF your definition of backend services is shared databases or queue services, then you're not doing microservices the right way. Split all of those tables off into database instances attached to their respective services. Services should not share the same database schema, but should share data through their defined (REST / websocket / network) interfaces. The answer in this scenario would be ""make one microservice A that calls a backend service, and 2 other microservices, B and C, that call A"".

  2. We want to push some formatting from the UI to the microservice. For instance, we always want to format a phone number from backend service A to have dashes. 5552223333 to 555-222-3333. Should I have a formatting microservice I pass through, or is it best to do it on each microservice that calls backend service A?

Answer 2: Shared formatting (or algorithms) should be placed in a shared module and installed in the code closest to the transformation. I would suggest the code necessary for parsing, formatting, and validating phone numbers be moved to a shared module that can be installed on each microservice that must handle phone numbers.
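
As a rough illustration (the class name and the strictly ten-digit North American format are assumptions on my part), the shared module could boil down to something like:

using System.Linq;

// Lives in a small shared package that every service needing phone-number
// handling installs as a dependency, instead of calling a formatting service.
public static class PhoneNumberFormatter
{
    // 5552223333 -> 555-222-3333; anything that is not ten digits is returned unchanged.
    public static string Format(string raw)
    {
        var digits = new string(raw.Where(char.IsDigit).ToArray());
        if (digits.Length != 10)
            return raw;
        return digits.Substring(0, 3) + ""-"" + digits.Substring(3, 3) + ""-"" + digits.Substring(6);
    }
}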

Side Note: If you disagree with the shared module idea and opt for a ""phone number formatting service"", then let's explore how that scales. What happens when you need an ""address formatting service""? Is this a separate service with all of the code, network, and maintenance overhead? Do we open up new REST endpoints on the ""phone number formatting service"" and blur the lines of single responsibility? The easier solution would be to create a new shared module (which has a low code, network, and maintenance overhead), isolate all of the address logic in that module, then include it in any service that must handle that logic.

  3. Each web app only communicates with its specific microservice. Should web apps communicate with multiple services rather than just the one? This would get rid of the duplication I see throughout the microservices.

Answer 3: This is a tricky question. In an absolute and ideal world, using one microservice as the facade for each web app is the right decision. This buffers erratic user behavior and performance issues from your core services. It also ensures clean separation of the client-specific logic from the core logic.

However, the real world answer is more organic. Start developing the web app and make requests directly to the services. If the capability of the web app grows beyond what is manageable with point-to-point communication to each service then build a facade.

Most likely the first people to speak up will be the mobile devs asking for a single aggregate endpoint/service they can call instead of many endpoints/services.

Side Note: You should get rid of the code duplication in your services independent of the answer you choose here. If there is code duplication across your services then move that into a shared module with a simplified interface, then include that module in your services and remove the duplicate code. You will still have some code duplication, but it will be code that uses the public interface you designed (stable), instead of specific business logic on how to handle a problem (less stable).

","321856","","321856","","2018-12-05 23:05:06","2018-12-05 23:05:06","","","","0","","","","CC BY-SA 4.0" "89371","2","","89048","2011-07-03 19:06:02","","4","","

Implicit in the question is the assumption that you should at some point create your own business. Running your own business, either as a freelancer/contractor or with your own products, has a completely different risk/reward structure to working as an employee. There's no shame in saying that you like programming and not the business stuff.

At the simplest level you're going to have to deal with bidding for work, invoicing and the legal requirements of running a company. Are you prepared to do that as well as doing your normal day job?

On the financial side, there are a number of other aspects worth considering. You may not get paid every month. People pay late, or possibly not at all. Many people prefer a steady income and can't deal with the uncertainty, even knowing that, on average, you could well be better off.

Finally, you really are in charge of your own career development when you go it alone. Do you know your strengths and weaknesses? Are you prepared to learn (and possibly fail) or pay to go on a training course (losing billable hours)?

I'd say that if you're aware of all the above and know what you're letting yourself in for (or at least are prepared to risk a very hard few months if you're wrong!), then now is as good a time as any.

As for skills, I am sure you're fine on the technical side. I've known people go self-employed with less experience than you. Expect to find the sales/marketing/finance/compliance stuff harder. If you're looking to found a company rather than just freelance, I'd wait until you've found some good co-founders. You're going to be as good as married to them so you want to make sure that you pick the right ones!

(For what it's worth, I've just gone freelance and am setting up a company for my iOS development activities. I'm not sure I'd have been comfortable doing either until about now, and I'm in my late thirties.)

","1503","","","","","2011-07-03 19:06:02","","","","0","","","","CC BY-SA 3.0" "382987","2","","382973","2018-12-13 23:09:04","","5","","

Splitting the state management across the business logic and UI layers is a bad idea.

You keep going on about Finite State Machines (FSMs) in a way that is fairly off-putting. I think I know what you really mean, but you sound like one of those goofs that turns off their brain and chants pattern jargon. You need to be clear about what you really mean by this.

The reason splitting ""state management"" across the layers is a bad idea is that you need a single source of truth. You really need to make this idea clear, though, because it's a tricky one. The UI should not be where the state of your model of the world is kept. It should only be a reflection of what the user wants and knows. Nothing more. The world may have changed since the UI last looked at it. There should be no decisions being made in UI code. ""State"" in the UI, if you insist on calling it that, should never be more than ""this is what the user selected"".

Done that way the UI is dumb. It's a pretty place to watch and click things. Nothing here even needs tests written against it because it's just boring obvious structural code. Nothing interesting allowed.

That means that logic you were going to put in the UI has to move somewhere else. I keep at least one layer between the UI and the model. That layer, which people give tons of different names, can soak up that homeless logic.

This idea even has a pattern named after it, called the humble object. It centers around the idea that objects near boundaries (like the UI) are inherently hard to test. So rather than kill ourselves trying to test the untestable, we move the suspiciously interesting logic into a testable object that doesn't touch the boundary. Being easy to test is nice, but it's not the main justification for this move.

By moving the logic into an isolated object you're free to define an interface/API for talking to it that makes sense in your domain. Something simple, readable, and free of details like understanding what a textbox is. So much so that you can get your DDD ubiquitous language going and write business rules that a domain expert, who's never written code before, could actually read and tell you if you have it wrong.

That bit of business logic is the guardian of the model/entities. It ensures that what we're doing to them follows the rules.
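
A bare-bones sketch of that split (all names invented, states borrowed from your description) looks something like this:

// The guardian of the rules: plain, testable, and it has never heard of a textbox.
public class StartTaskHandler
{
    public string Start(TaskItem task)
    {
        if (task.State != TaskState.NotStarted && task.State != TaskState.Awaiting)
            return ""Task cannot be started from its current state."";   // the business rule lives here
        task.State = TaskState.Started;
        return ""Task started."";
    }
}

public enum TaskState { NotStarted, Awaiting, Started, Ended }

public class TaskItem
{
    public TaskState State { get; set; }
}

// The humble bit: it forwards what the user clicked and shows what came back.
// No decisions are made here, so there is nothing interesting to test.
public class TaskScreen
{
    private readonly StartTaskHandler handler = new StartTaskHandler();

    public void OnStartClicked(TaskItem selected)
    {
        ShowMessage(handler.Start(selected));
    }

    private void ShowMessage(string text) { /* toolkit-specific display call */ }
}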

This suggestion didn't fly, and instead we've landed on a proposed solution which involves managing each entity's state partially in the UI layer and partially in the business logic layer, which concerns me. For instance, the UI layer will query the business logic layer for entities and may possibly need to decide the initial state for an entity, depending on that entity's current state. But the UI layer should only do this if the entity's current state is x or y, not a, b or c. On top of that, the actual idea of using an FSM to manage the state at all seemed to be a no-flyer (perhaps I didn't explain myself well enough as the meeting was called at short notice).

The best thing to do when ambushed like that is to say ""I'll get back to you"". Don't agree to anything. Don't present anything. Don't let them get away with anything.

This is important stuff and your voice is being stifled by meeting engineering. Build your own support base with one-on-one conversations, then call your own meeting. It's nasty and political, but it's the world we live in.

Your post mentions some ideas I like (regardless of whose they were):

  • The ui module, speaks to the business logic module to create, update, delete and get entities.
  • These entities can be in various states (not started, awaiting, started, ended as well as a few others)
  • The business logic module needs to consolidate the sources into a unified model for the UI module (as we don't want the UI module to be concerned with the nitty gritty details of the entity sources).

Some ideas I don't:

UI layer should manage the state of the domain objects

NO. Here I agree with you that the UI shouldn't know the domain objects exist. It should only know how to request things through controller logic and how to display responses from presenter logic. The UI shouldn't be able to directly touch those domain objects. If the UI can touch them then the domain isn't properly encapsulated.

I argued that the business logic module should maintain an FSM for each entity, such that the UI layer can issue a command to the business logic layer for a given entity, and that command will be executed if it's valid for the entity's current state. If the command was valid, then the FSM would transition to a new state, executing side effects such as API calls if required.

Here you went wrong because you were giving them a design for their stuff. You should have stuck to getting inter-layer communication requirements ironed out.

Now if the worst happens and they just won't work with you, don't give in and shove logic into the UI. Make your own layer between the UI and their stuff to handle their shenanigans. Don't package this layer with the UI. Do that and you can ensure the UI has a clean API that you can test quickly when fingers start pointing.

TL;DR You need a better counter-argument to their design than ""use an FSM"".

Some additional rants: MVC, Clean Architecture, inter-layer communication, abstraction

","131624","","131624","","2018-12-14 11:26:38","2018-12-14 11:26:38","","","","0","","","","CC BY-SA 4.0" "383052","2","","383041","2018-12-14 20:42:08","","3","","

Gathering data/input (= feedback) for the retrospective can be done before the retrospective meeting, in the meeting, or both (combined). Below are some advantages and disadvantages of each approach.

Gathering data before the meeting

You can collect data before the meeting using a shared document or workspace, like Google doc, Confluence, Wiki, Slack, etc. Alternatively, as the facilitator you can ask the participants to send their input to you where you collect it and will distribute it to the team before or at the start of the meeting.

Advantages:

  • Gives people more time to think about their retrospective input
  • They can prepare their input when time permits them (time and place independent)
  • Sometimes when people reread their input it makes them think of additional things or better formulations
  • Makes it easier for introverts to share their opinion
  • Possible to avoid groupthink (if people don't see each other's input)
  • The facilitator can review the input and ask for additions or clarification before the meeting
  • You can use questions to focus input on a specific topic
  • Easier for people who prefer writing over speaking (note that this can also lower language barriers for retrospectives in non-native languages)

Disadvantages:

  • People might forget to give input or don't have time for it
  • You may have to follow up if people aren't disciplined enough to deliver input
  • The amount and quality of the input can vary between people

Gathering data in the meeting

Advantages:

  • As a facilitator, you can interact directly with people when they give input
  • It ensures that there is time for everyone to give input
  • You can timebox it, and when needed extend the time window if more input is needed

Disadvantages

  • Part of the meeting time is spent on gathering input, so you may have less time for analysis and actions (unless you plan more time for the meeting)
  • People who are more vocal might inhibit others to speak up (there are ways to deal with this)
  • Might lead to groupthink where people who think differently about a topic don't speak up

Depending on your situation and the advantages and disadvantages I suggest using the approach which fits best. Or experiment and find out what works for you in which situations.

","275674","","","","","2018-12-14 20:42:08","","","","0","","","","CC BY-SA 4.0" "384314","2","","314386","2018-12-20 02:13:45","","2","","

Let's say you're writing some image editing software in Python similar to Photoshop:

And you're collaborating with a team of people to develop. The main point of the software is to edit images, and sometimes those images can be very high in resolution (4000x4000 pixels with 32-bit channels, leading to ~256 megabytes per image/layer).

In that case you might ask where you need to store or reference images. And the most obvious place is probably the layer system which is where the user adds/deletes image layers. Then the collection of layers might be owned and referenced by a ""document"" (which is a collection of layers). On top of that you might have smart filters which might want to reference layers to which they're being applied, along with layer styles. You have an undo system which probably wants to store undo actions which reference the layers they affect. The list goes on and on but many things might come to reference images and reference the things that are referencing the images.

When Should Images Be Freed?

And here's a question: when should the images be freed? Oddly I sometimes find GC enthusiasts turning this into some complex technical or theoretical CS question. It's not a complex question; it's not a trick question. It's a simple user-end design question.

The memory for those images should be freed when the user tells us he/she no longer needs those images around, when they delete layers, close entire Photoshop documents, etc. (give or take, perhaps some background thread needs to finish a tiny bit of processing first before those images are freed, but the ""freeing process"" begins when the user tells us they no longer need the image(s) around). That's when the memory should be freed, especially in software where the user's document might require gigabytes of memory when they're doing very high-res work, where it could become a serious usability issue if the images just linger around in memory without being freed until the entire software is closed.

Resource Management

But you can probably imagine if your colleagues get sloppy and start storing lifetime-extending image references all over the place without thinking, and lifetime-extending references to those things that reference images, and so forth, that you might very easily run into a scenario where a user requests to close a document and nothing gets freed because some thread or object that still persists is still referencing something that prevents all these things from being garbage collected whose lifetime is tied to the entire duration of the application.

You could find yourself looking at a mess like this as far as places storing lifetime-extending references to images (and this is not even nearly as gross as some examples I've encountered in the real world; this diagram is actually rather simple in comparison):

And that's a lot of places to forget to remove the appropriate references if we want to make sure that image is freed before the entire application is closed.

So GC is not a silver bullet against resource management of this sort for long-lived applications (not like one that does a computation and shuts down) where the lifetime of objects is tied to things the user is doing. In those cases you might not have to write low-level C code prone to errors like dangling pointers like:

free(memory);

But you might still need to remove object references from all the relevant containers (hash table is just one example) and background threads storing such object references and so forth in response to the user's input. It's still inevitably a manual affair because the user is the one telling us when the resources are no longer needed; it's not something that can be deduced automatically.

So one of the solutions to help prevent these types of leaks, which Robert already mentioned, is weak references, and I do wish more people paid attention to them for these types of applications when working with GC. They can really help a lot to prevent these types of mistakes and corresponding leaks, which can be really difficult to trace down in hindsight.

Ah, OK! If it's mostly global objects that I should worry about, then it is not that difficult!

It's not just global objects. It's ""persistent"" objects. Those images in Photoshop might not be globally accessible. They might use dependency injection and limit scope and access by passing these images around to relevant places via parameter. Thread objects running in background might be given them via parameter and store them as member variables/references. Data structures being passed around via parameter might store references to them. But they are persistent state, as in their scope isn't tied to some local function which just computes something quickly and returns some output. In this type of software there's a notion of a persistent ""application state"" (ex: state local to somewhere close to the main entry point of the application), and it's not necessarily globally accessible, but it does linger around and persist for the duration of the application, and it's only going to free things (like images) under GC that it no longer references whatsoever in any part of its encapsulated members and the encapsulated members of those members and so forth along with auxiliary thread objects and what not.

So if you're writing software like this, with or without GC, you have to think clearly and carefully about resource management. That's a huge and unavoidable part of the technical design if you want to make sure it doesn't just continue to use more and more and more memory until the user shuts the whole thing down.

My question is: when should I worry about freeing objects from memory in Python?

For these types of applications, it's the same whether you're using Python or Java or C or C++ or anything else. The strategy for resource management, to me, begins with asking, ""Who owns what?"" It might be easy to think that everything which needs to reference images should ""share ownership"" of them, but that's getting seduced by the temptations of GC. From a user-end standpoint only layers should own images. Only the document should own layers. Things of this sort. You might make one exception for the application history since it needs to make sure the necessary data is kept around to be capable of being undone, depending on how you implement it (I'd find shared ownership reasonable there, but there should be a separate history per document, and not one for the entire application).

If you use weak references everywhere else that doesn't require ownership, that's already a great start. You might acquire strong references for short-lived durations in threads to ensure the objects don't get garbage-collected until the thread finishes processing them, but that's for short-lived durations inside some local function. Be very careful about where you store persistent, lifetime-extending references, because every time you do that, unless you're careful to null out/remove those references at the appropriate times (ex: in response to the proper user input events), it could translate into a logical leak.
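
As a minimal Python sketch of that idea (invented names, just to illustrate the mechanism): the weak-reference container never extends an image's lifetime, so once the owning layer lets go, the auxiliary lookup entry disappears on its own.

import weakref

class Image:
    def __init__(self, name):
        self.name = name  # stand-in for a large pixel buffer

class Layer:
    def __init__(self, image):
        self.image = image  # the layer is the owner: the only strong reference

# Auxiliary lookup table that must NOT extend image lifetimes.
images_by_name = weakref.WeakValueDictionary()

layer = Layer(Image('background'))
images_by_name['background'] = layer.image

print('background' in images_by_name)  # True while the layer owns the image
del layer  # the owner goes away in response to user input
print('background' in images_by_name)  # False in CPython (refcounting); after the
                                       # next collection under other GC schemes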

Similar to C, where every kind of memory deallocation is completely manual (even for short-lived memory, not just persistent application state), it's a nice habit to write the code that frees the memory as soon as, or even before, you write the code that allocates it. Similarly, if you're implementing something like the hash table in your example to cache objects, I think it's a smart idea to write the code that removes the object reference from the hash table at the appropriate times if you can't use weak references, perhaps even before writing the code that inserts the object, and to test it and make sure that removal works, because it can easily fly under the radar of testing as a stealthy bug with silent leaks unless you very specifically test for that case. With GC I'd double up my testing efforts here, because while it isn't susceptible to dangling pointers whatsoever, the types of leaks you get in exchange if you don't remove these references at the appropriate times can be very difficult to detect and trace down (the mistake might not be as ""fatal"" to the application, but it's harder to detect, since everything will appear to work fine except that memory isn't being freed at the relevant times).
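
If weak references are not an option for that cache, a sketch of the discipline described above might look like this (hypothetical event handlers; the point is that the removal path is written and tested alongside the insertion path):

import threading

_cache_lock = threading.Lock()
_preview_cache = {}  # doc_id -> image, strong references: a potential logical leak

def on_document_opened(doc_id, image):
    with _cache_lock:
        _preview_cache[doc_id] = image

def on_document_closed(doc_id):
    # The removal counterpart: forgetting this is the silent leak described above.
    with _cache_lock:
        _preview_cache.pop(doc_id, None)

def test_close_releases_preview():
    on_document_opened(42, object())
    on_document_closed(42)
    assert 42 not in _preview_cache  # the test that keeps the leak from hiding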

","","user321630","","user321630","2018-12-20 03:05:18","2018-12-20 03:05:18","","","","0","","","","CC BY-SA 4.0" "384720","2","","365349","2018-12-30 15:14:39","","1","","

We are building a new product in real estate space and the end users of this product are not so tech savvy. To have better user experience with our product, we want our users to find relevant things quickly and easily. Apart from a simple UI, a universal search bar seems to add value.

The search bar with auto-complete will allow users to find information such as - their billing history (past payments, invoices..), help content, support content from helpdesk tickets, data from chat history and such.

I think you've failed to do the first step of software design (researching use cases), so you're throwing all kinds of nonsense into your project (a universal search, chat, a kitchen sink, maybe an ice-cream machine on Thursdays) using "shotgun logic" (more projectiles means more chance of hitting a target, even when you don't know where the target is).

If you want users to find relevant things quickly and easily, then you need to know what is relevant for the specific user at the time. For example, someone looking for a house to rent (in a certain area, in a certain price range) is not going to want a universal search that pollutes the search results with garbage intended for people selling a house. They don't want a universal search, they want a "use case specific search".

For another example, someone looking for their billing history will want to log in and click a "billing history" link to see a chronological list, and won't want a universal search (and probably won't want any kind of search).

","25811","","-1","","2020-06-16 10:01:49","2018-12-30 15:14:39","","","","0","","","","CC BY-SA 4.0" "384752","2","","176049","2018-12-31 12:26:05","","3","","

OK, let's map this to some core properties rather than abstract concepts that only make sense once you understand what they mean. Like some commenters, I do not agree with the accepted answer; I say these are concepts independent of memory management.

Encapsulation

You want to hide complexity from the client, only publishing the stuff that matters from the client's point of view, making things easier for the client. As a bonus you get the certainty that nothing can mess with the encapsulated code. As long as you respect the interface and the functionality, you can rework stuff and rest assured you will not break anything. The dependency is on the published interface only.

Encapsulation is one of the main pillars of object orientation. It is not a pattern, it is a principle, and it may apply to logic and data alike. It is just a basic benefit of using classes in the first place, not something you would see explicitly stated in a diagram or design document.

Association

This is a very loose concept that basically describes just a dependency between objects. One object knows about the existence of another object and may use its functionality at some point. In a diagram the association would alert you that there is a dependency and that changing one object may impact the other. It is not a technique to apply when you have some problem to solve; it is more like a fact of life you should be aware of when it is there. It is a relationship, like an Invoice having an Orders property. Both Order and Invoice have their own life cycle. One is about goods and the other is about payment, which essentially makes them independent, but it is important to know what goods are being paid for.

Containment

I am adding this because it belongs in the series and will make aggregation more meaningful. I do not hear the term being used in an SE context a lot anymore, but I think it is still useful. Containment implies encapsulation but is strictly about object instances private to the containing class. The functionality of the contained objects is selectively exposed through public interfaces. The containing class controls the life cycle of the contained objects. You use this when you need some features of an existing class to make the containing class functional. This could be an XML parser, and the client of the containing class may never see or know anything related to XML. As a metaphor, think of the contained object as a back office worker. Clients never meet these people, yet they are needed to provide the service.

Aggregation

This is a lot like containment, except for the life cycle control and visibility of the aggregated objects. The aggregated objects are already available in a different context and are managed by a different entity. The aggregator is merely offering a facade, a portal to the aggregated objects. When the client addresses the aggregate, it gets the interface of the aggregated object itself, not a wrapper around it. The point of the aggregate is to offer a logical grouping of things. Think of an access point to services or some other wrapper object.
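
To make the contrast concrete, here is a small Python sketch (invented classes, purely illustrative): the contained parse tree is created, hidden and destroyed by its owner, while the aggregate merely groups services that already live and are managed elsewhere.

import xml.etree.ElementTree as ET

class ReportReader:
    # Containment: the parsed tree is created here, stays private, and dies with the reader.
    def __init__(self, xml_text):
        self._tree = ET.fromstring(xml_text)

    def titles(self):
        # Clients see titles, never anything XML-related.
        return [node.text for node in self._tree.iter('title')]

class ServicePortal:
    # Aggregation: the aggregated objects already exist and are managed elsewhere;
    # the portal merely offers a logical grouping and exposes them as they are.
    def __init__(self, billing, shipping):
        self.billing = billing
        self.shipping = shipping

reader = ReportReader('<report><title>Q1</title><title>Q2</title></report>')
print(reader.titles())  # ['Q1', 'Q2']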

Composition

It seems to me this is the more contemporary term for containment, possibly because it was coined in a popular book of relatively recent origin. Where containment focuses on the technical aspects of object relationships, composition is typically used in the context of design decisions, more specifically as a more flexible alternative to inheritance.

It does not say much about the nature of object relationships or ownership, it merely indicates that functionality is implemented by combining the functionality of existing classes. Therefore I would argue it does not belong in this series because it does not say anything about the technical aspects of an implementation where the others do.

","209665","","","","","2018-12-31 12:26:05","","","","0","","","","CC BY-SA 4.0" "384943","2","","384259","2019-01-04 02:43:03","","2","","

Cross Purposes

This is a complex problem, so to be clear the issues that need to be solved are:

  1. Display - How should data be rendered so as to appear sensible to a human
  2. Intent - How should the platform behave around the data
  3. Response - What should the platform collect, and what should the platform do with it.
  4. Integration - How to manage the interests of several sources of configuration.
  5. Configuration - How do you express all the necessary data to orchestrate your application.
  6. Communication and Verification - How do you communicate this to humans, and allow those humans to be sure that they got it right.

You probably have solved some of these. I'm detailing points 1, 2, and 3 to set up my argument for points 4, 5 and 6.

Display

This is a solved problem: MVC and any of its variations will do.

A useful variation runs roughly:

  • The UI is composed of Models, Views, and Controllers
  • Models can be composed of models.
  • Views can be composed of views.
  • Controllers can be composed of controllers.
  • A view renders one model, and has one slot for a controller per kind of interaction available with that view.
  • A model can be rendered simultaneously or serially by many views, and has one slot for a controller per kind of state change.
  • A controller acts on models and views in response to an interaction with a view, or a model changing state. The controller takes a model describing salient facts about the event, and transforms it to perform its interaction.
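
A minimal Python sketch of that wiring (names invented for illustration; a real platform would build these pieces from configuration objects rather than hard-code them):

class CounterModel:
    # A model that notifies every attached view when its state changes.
    def __init__(self):
        self.value = 0
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set_value(self, value):
        self.value = value
        for view in self._views:   # a model can be rendered by many views
            view.render(self)

class TextView:
    # A view renders one model and has one controller slot per interaction.
    def __init__(self):
        self.on_click = None

    def render(self, model):
        print('value =', model.value)

    def click(self):
        if self.on_click:
            self.on_click({'source': 'TextView'})  # small model describing the event

class IncrementController:
    # Acts on a model in response to an interaction with a view.
    def __init__(self, model):
        self.model = model

    def __call__(self, event_model):
        self.model.set_value(self.model.value + 1)

model = CounterModel()
view = TextView()
view.on_click = IncrementController(model)
model.attach(view)
view.click()   # prints: value = 1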

Models

Define a small set of models for representing specific types of data.

  • Boolean, Integer, Text
  • Lists of X
  • Graphs of X vs Y

As many or as few as form the primitives for your display. Where possible, define models in terms of more primitive models.

Views

Define a set of views for rendering data to the display device. A good rule of thumb is a one-to-one correspondence with each model, but it is perfectly reasonable for a model to not have a view, or for there to be many views for one model.

Where possible, make views delegate responsibility for rendering submodels to subviews. But this does not always make sense, such as displaying a graph in tabular form (each cell using the integer subview) vs pictorial form (the list of integers for a series using a line subview).

Controllers

There are three types of controller:

  • Primitive controllers that perform one single well defined state-change on a model, or view.
  • Composite Controllers that compose other controllers to form some sort of sequential, controlled repetition, or selective behaviour.
  • A Business Controller that passes a copy of a model to the business logic.

These should be defined clearly, and simply.

The models passed to the controller should be the necessary and sufficient description of the interaction or state change, such that no one needs to know who generated the event in order to obtain further details. For any particular case this might be empty, one or two values, right up to a copy of the entire state of the view and/or model.

Interacting with Business Logic

From the perspective of the UI, the Business logic is just another user - it just doesn't need a picture drawn for it. That being said it still needs a view, or views. This provides a model that can be read, and a set of actions that the business logic can perform.

Configuration and Schemas

The configuration for the UI will hold:

  • view configuration (font size, alignment, which sub-views are composed, ...)
  • model configuration (the specific data to be displayed, captured)
  • controller configuration (to link up how interaction with a view changes state in other views, and/or models)
  • a schematic for linking views, models, and controllers together into a whole.

To be clear, the configuration can supply many useful defaults, or a process to obtain a nuanced default, or user-originated configuration. It may even make use of templates detailing common setups of model/view/controller. Placing all the burden of UI specification at the upstream services' feet is just moving the problem upstream and turning your platform into a fancy UI library/web browser.

The minimum configuration needed would be just the configuration for the models.

Intent

This is also a solved problem: it's called business logic.

This might live on the client, the server, or be somehow divided or shared. However the layer is organised, there will be processes that are responsible for modeling or orchestrating certain behaviours. Expose these processes to be configured by a configuration object.

Be careful about what you allow to be configured from your business layer; it represents a sizeable security risk, as it is a form of unrestricted eval() using other people's code in your application. The configuration object is a form of code.

This allows the application to behave fundamentally differently every day based on the data received. For example: collected results may be forwarded to another service, or a report may be sent to the UI's user.

Response

This is a harmonious relationship between the Rendering and the business behaviour.

The idea is that some of the UI models can be configured to allow the human to input a response through clicking, key presses, voice, etc., which populates the view's respective model.

Then the business layer receives the model and proceeds to validate, and act on the data in that model as the configuration requires.

Of course, you may only want a simple response: a direct model-to-text mapping, sent via a file or an upload. If you want more complex and more configurable interactions, then the configuration of the UI and Business Logic Layer will become more complex.

Integration

The devil is often in the details. The application is serving as a Platform for two or more upstream definitions of behaviour. To get anywhere you will need to address how they will co-operate.

The no fuss answer is not at all. Each upstream service is dealt with independently. The user may be able to switch their views between each like tabs in a browser, but each functions independently of the others.

On the other extreme is the route of web pages. The page is broken down into sub-pages, each sourced from potentially different origins. Each sub-page holds its own models, views, controllers, and schematic for assembling that part. They are even capable of communicating with each other by affecting views (some rendered, others not) exposed by other parts of the overall page. This can become a mess, as there are a number of actions that can be taken by one sub-page that are detrimental to other sub-pages or to the overall page.

In the middle is a harder middle ground. Here the competing interests of the services are constrained by well-defined collaboration interfaces; the more of these interfaces are supported, the more capable the platform, and the more likely that bad actors can ruin the platform for everyone.

Configuration

The shape of the configuration objects that are consumed by the views, models, controllers, display schema, and business process is your schema. There is nothing else that your platform can understand.

The main problem seems to be a lack of understanding of your platform's schema. Which is reasonable; any large application has a mind-boggling schema.

My suggestion is not to worry about self-describing data. What you want to know is: from a given service, what is the top level configuration object?

This piece of information alone completely describes how that service will interact with your platform. If all of the upstream services provide the same object then this becomes really simple, otherwise there will be several high-level entry-points.

The top-level configuration object configures a Business Process. From there the process is responsible for unpacking that top object, which includes defining the data shape. Each sub-configuration object is passed off to another business process, each responsible for unpacking its own object, defining what makes that object valid, and distributing sub-configurations of its own.

Each process that is interpreting the meaning of a configuration object is responsible for assembling the results of each sub-configuration, along with any warning or error diagnostics, and passing it all back without side effects. These processes should not cause side effects, because that undermines the utility of the process. By having no side effects while comprehending, errors can be handled without messy cleanup procedures, the state of the comprehended configuration can be inspected by tests, and it allows for clean modularisation. Ideally the exact types being configured would be hidden behind an injected factory.

If side effects must happen before the configured state returned is usable, add an activate() or similar method to the result. This allows the process needing a configured output to decide whether it needs those side effects or whether it's just interested in validating.
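
A sketch of that shape in Python (hypothetical names): each process unpacks its own slice of the configuration, collects diagnostics, causes no side effects, and returns a result whose activate() performs any deferred side effects only if the caller wants them.

def comprehend_widget(config, diagnostics):
    # Leaf process: validates its own slice of the configuration, no side effects.
    if 'font_size' not in config:
        diagnostics.append('widget: font_size missing, defaulting to 12')
    return {'font_size': config.get('font_size', 12)}

def comprehend_page(config):
    # Top-level process: unpacks its object, delegates sub-configurations,
    # assembles results plus diagnostics, and still causes no side effects.
    diagnostics = []
    widgets = [comprehend_widget(w, diagnostics) for w in config.get('widgets', [])]

    def activate():
        # Deferred side effects, run only if the caller actually needs them.
        print('building', len(widgets), 'widgets')

    return {'widgets': widgets, 'activate': activate}, diagnostics

result, problems = comprehend_page({'widgets': [{}, {'font_size': 14}]})
print(problems)       # a standalone validator can stop here
result['activate']()  # the live platform goes one step further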

Communication and Verification

Now you could try and find a third-party data definition language, or force the configuration to carry useless validation data that your program does not care about.

Or, because the business process for comprehending the configuration was constructed modularly, it can be placed in a library. That library, plus a factory that constructs simple data objects instead of the complex platform objects, can be placed in a simple application that verifies text files, etc. against the expected configuration format and displays diagnostics should an error be encountered.

You will still need to communicate what the configuration format is between humans. But this can be done at a much more human-friendly level, in a human-friendly language. And the humans can submit their mocked-up data, or system output, to the custom validator and get a real diagnostic back about how the real system will interpret it, because the real comprehension (and validation) code is being used.

","319783","","","","","2019-01-04 02:43:03","","","","0","","","","CC BY-SA 4.0" "384989","2","","384986","2019-01-04 20:11:12","","2","","

Despite your efforts, part of your question is still off-topic (recommendations attract opinionated answers that won't have lasting value to others). I'll attempt to answer the other part in a way which would make it valuable for other people.

There are a bunch of authentication mechanisms out there. You may want to check popular APIs such as Twilio, Google's APIs or Amazon web services to get an idea of how simple or how difficult it could be. Usually:

  • When the goal of the API is to be able to identify you as a consumer of that API (for instance for invoicing purposes), then you'll be provided with an identifier and a secret.

  • When the goal of the API is to give programmatic access to the data of a person, OAuth or OpenID will be used.

Aside from those two mechanisms, there are more esoteric ones. For instance, all the APIs I produce rely on client-side SSL certificates, which makes it possible to have a good level of security at nearly zero cost, the code being of extreme simplicity. While those mechanisms can present some benefits, I wouldn't advise you to use them unless you know the audience (i.e. the programmers who will be developing software which would use your API) is familiar with them. In general, this means sticking with ID/secret or OAuth/OpenID.

One of the benefits of using those common mechanisms is that most frameworks implement them for you. Never, ever try to implement your own authentication; unless you're a security expert, you'll get it wrong.

Some of those frameworks will handle the error handling for you, i.e. send a coherent response to the user who was unable to authenticate. When this is not the case, do not use redirects; instead, return an HTTP 401 or HTTP 403 depending on the case, possibly providing an error message. The message should be short and should not contain technical details; specifically, never include a stack trace.
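
For illustration only, a minimal sketch using Flask and an invented X-Api-Key header (the framework and header name are arbitrary choices, not a recommendation): the point is the short, non-technical error body, the correct status code, and the absence of redirects.

from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEYS = {'demo-key'}  # stand-in for a real credential store

@app.before_request
def authenticate():
    key = request.headers.get('X-Api-Key')
    if key is None:
        # No redirect, no stack trace: a short, non-technical message.
        return jsonify(error='Missing API key'), 401
    if key not in API_KEYS:
        return jsonify(error='Invalid API key'), 403

@app.route('/v1/ping')
def ping():
    return jsonify(status='ok')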

When defining the interface of your API, remain technology-agnostic. The user shouldn't know nor care if you're using .NET Core or Ruby on Rails. This includes using proper terminology; for instance, the user shouldn't have to know what claims are.

As with any authentication, be prepared to spend a few hours implementing a solution which works, and months refining it. Edge cases are usually tricky and security is at stake. For instance, how do you protect the authentication part of your API against brute force attacks, while being sure your legitimate users won't be blocked if, at some point, they use a wrong configuration with a wrong secret key? Or, when the hashes of secret keys leak because you were hacked, how do you inform your users about this incident?

since this is an API I assume I need to worry about authentication

It depends on your business case. Some APIs are public, and so they have no authentication. In general, though, it's a good idea to put authentication in place even for an API serving non-personal data for free, in order to protect yourself against abuse.

","6605","","","","","2019-01-04 20:11:12","","","","1","","","","CC BY-SA 4.0" "91315","2","","91017","2011-07-10 03:02:48","","1","","

Not all of the developers on the project have the domain knowledge to make these decisions well

If some developers lack a deep understanding of the system's purpose or customer requirements (knowledge of the customer's domain), then they will take longer to write code, and the code written will require more testing and more rework. This is the typical trial-and-error approach to learning, and it is both a cost and an investment.

If some developers did not design the architecture well, then delegate the design work to a better architect. It takes more than domain knowledge to design well, e.g. architectural wisdom, at which some programmers do better than others and which doesn't grow linearly with experience. There are two ways this can work:

  • The architect can specify the CRC, and the interface methods, and let other developers implement the details.
  • The architect can come in after the fact and oversee the refactoring of the project once the code/spec/tests have already been stabilized.

If you want the software architecture to match nicely with the purpose, well ... switch teams, or switch companies. DDD shops usually have a ""non-experts need not apply"" policy.

Typically, the person writing the specification/test is not the developer who will write the implementation and make the specification/test pass.

If one specification/test is too general/vague/huge for the developers to implement ... break them into smaller specifications/tests.

Do I solve this problem (the specification/test) with an existing object(s), or do I create (for the first time) the dependencies which will allow the system to pass this specification/test?

In your situation, I suggest embracing rework. Sometimes the answer to the create/reuse question is not obvious. Ask an architect whenever one is available. If not, flip a coin.

... we're colliding so far as disparate designs and objects created in relative isolation begin to conflict and duplicate each other.

Here is one approach that has worked for me in a smaller project. It is probably not scalable for larger projects.

Each team member begins a day's work by reading all of every other team member's code changes.

(This is suitable for ""everyone in one giant big room shared with other teams"" settings as it can be done in absolute silence. It is also suitable for geographically dispersed teams as it does not occupy face time. Co-located teams which have dedicated office rooms might find pair programming more suitable.)

(The effort is manageable up to 10 source commits per day or 500 lines of change.)

","620","","620","","2011-07-10 03:15:06","2011-07-10 03:15:06","","","","0","","","","CC BY-SA 3.0" "385509","2","","385497","2019-01-14 19:42:24","","25","","

The Robustness Principle--specifically, the "be liberal in what you accept" half of it--is a very bad idea in software. It was originally developed in the context of hardware, where physical constraints make engineering tolerances very important, but in software, when someone sends you malformed or otherwise improper input, you have two choices. You can either reject it, (preferably with an explanation as to what went wrong,) or you can try to figure out what it was supposed to mean.

EDIT: Turns out I was mistaken in the above statement. The Robustness Principle doesn't come from the world of hardware, but from Internet architecture, specifically RFC 1958. It states:

3.9 Be strict when sending and tolerant when receiving. Implementations must follow specifications precisely when sending to the network, and tolerate faulty input from the network. When in doubt, discard faulty input silently, without returning an error message unless this is required by the specification.

This is, plainly speaking, simply wrong from start to finish. It is difficult to conceive of a more wrongheaded notion of error handling than "discard faulty input silently without returning an error message," for the reasons given in this post.

See also the IETF paper The Harmful Consequences of the Robustness Principle for further elaboration on this point.

Never, never, never choose that second option unless you have resources equivalent to Google's Search team to throw at your project, because that's what it takes to come up with a computer program that does anything close to a decent job at that particular problem domain. (And even then, Google's suggestions feel like they're coming straight out of left field about half the time.) If you try to do so, what you'll end up with is a massive headache where your program will frequently try to interpret bad input as X, when what the sender really meant was Y.

This is bad for two reasons. The obvious one is because then you have bad data in your system. The less obvious one is that in many cases, neither you nor the sender will realize that anything went wrong until much later down the road when something blows up in your face, and then suddenly you have a big, expensive mess to fix and no idea what went wrong because the noticeable effect is so far removed from the root cause.

This is why the Fail Fast principle exists; save everyone involved the headache by applying it to your APIs.

","935","","-1","","2021-10-07 07:34:52","2019-01-15 20:29:32","","","","12","","","","CC BY-SA 4.0" "92862","1","92891","","2011-07-14 14:13:18","","431","20301","

A little background first. I'm a project manager at medium-sized company. I started as a CS major and had a little exposure to programming, but after a few months I knew it's not my path, so I switched over to management. That proved to be a good decision, and after graduating I've worked in software management at various companies (for 5 years now).

Recently, we had a very painful project. It was the worst of the worst, with many mistakes both on our side and on the customer's side, and we just barely finished it without losses. It led to many frustrating situations, one of which escalated to the point where one of our senior developers left the company after a vocal argument with us (the management). This was a red flag for me: I did something terribly wrong. (For the record, the argument was about several mistaken time estimates.)

I searched many places for answers, and a friend pointed me to this site. There are many questions here about frustrations with management. I can understand that bad experiences in general lead to a general reluctance toward ""those guys in the suits"".

I'm that guy in the suit. It may not look like it, but all I want is a successful project, and with limited resources that takes painful decisions. That's my job. One of the things the aforementioned senior developer complained about was work equipment. Frankly, I had no idea that the computers we had were not suited to the work. I have since fixed that, but there was obviously a huge communication gap between me and the programmers. Some of the most brilliant developers are the most shy and silent people. I know that, and it was never a problem during an interview. People are different and have strengths in different areas.

The case of the underpowered PCs is just one of the many that led me to thinking that there is a communication issue. How can I improve communication with programmers without being intimidating and repetitive?

What I'm hoping is that people don't complain about good things. If you love your workplace and love (or at least like :)) your manager, please tell me about them. What are they doing right? Similarly, if you hate it, please describe in detail why. I'm looking for answers about improving communication because I think that is my problem, but I might be wrong.

","31332","","","","","2013-06-12 12:28:50","I'm a manager. How can I improve work relationships and communication with programmers?","","34","19","258","2013-10-30 10:48:01","2011-07-14 16:28:03","CC BY-SA 3.0" "92876","2","","92862","2011-07-14 14:33:45","","4","","

Consider what kind of reaction you give a programmer who may have a question, comment or concern. Is there a ""What do you want now?"" or ""Why are you bothering me with this?"" kind of response? How good are you at encouraging the developers to voice concerns and comments? That is merely a starting point, though.

Secondly, be careful of where you are trying to have these discussions. I doubt I'd be very open discussing my work machine with someone in the next cube if I knew my manager was within earshot of hearing the whole thing. If you want people to give open and honest feedback, there has to be some privacy given to knowing their answers aren't going to be publicly broadcast or used against them.

Third, consider what kind of Emotional Intelligence skills you have. Emotional Intelligence for Project Managers: The People Skills You Need to Achieve Outstanding Results by Anthony Mersino is a book recommendation I got yesterday from a Lunch and Learn about EQ. If you really want to get deep into psychology here, there are various personality profile tools that could be used, e.g. Enneagram, social styles, and MBTI.

Lastly, consider what the culture in your company is. Are mistakes swept under the rug? Are complaints a big no-no that could get someone in trouble really easily? What behaviors are rewarded or encouraged, and which are merely tolerated? While some of this is observation, some of it may also require conversations that should be held either away from the office or in a room where there isn't likely to be eavesdropping. You will likely be repetitive in trying to use this in the beginning. That isn't a bad thing if you are trying to establish a new practice and get people on board with speaking up, if the culture was primarily one where everyone just knew to ""suck it up."" This may be more touchy-feely than other answers, but this is what I'd give for an answer if I was asked about this where I work.

","4327","","","","","2011-07-14 14:33:45","","","","0","","","2011-07-14 16:28:03","CC BY-SA 3.0" "92884","2","","92862","2011-07-14 14:52:47","","16","","

In general, the guys in the trenches start feeling mutinous when they feel their gripes aren't being heard by people who can and will fix the situations. When they don't even feel they can gripe without risking their standing in the company, that's even worse.

I'm a Theory-Y kinda guy, and most ""knowledge workers"" tend to be; given a chance, we like our work and want to do it well. However, the downside of a Theory-Y workplace is that it may not be immediately apparent there's a problem, because people, wanting to do well and thus not wanting to make waves, will find ways around that problem, or simply ignore it. This leads to pent-up frustration that eventually blows up in the entire team's face. A shop run by a Theory-X manager would probably have guys that complain much earlier; the employees are only in it for the money, so if the job sucks more than usual they'll gripe.

As for what you can do, in an environment with seniors and leads in the room doing the job, they are your best asset. Talk to them. You might schedule 30 minutes a week for ""two-ways"", where the leads give you updates and air concerns about the day-to-day of the project, and you give them updates on the business side and plan with them to resolve concerns before they become problems that seriously affect the team.

In Agile, at the end of each ""sprint"" or ""iteration"" (which is a unit of development work usually lasting between one and three weeks), the entire team, from the most junior members up to the PM, has a ""retrospective"". They look back at what they did, what went right, what didn't, and identify things to keep doing and things to change. There are several formats, and you can invent your own, but the result of the retro should be that people feel their voice was heard, and that things will change as a result.

Talking about Agile: my first job was for a small company, and by ""small"" I mean the whole firm couldn't field a softball team. There were four programmers when I started, and that dwindled to two before I left. The founder, President, CEO and 95% stakeholder in the company ruled it with an iron fist, and he was the sole source of planning in the organization, meaning there wasn't much. The Boss was a workaholic and expected everyone else to be as well; everything you had to give was no more or less than his expectation, and for this he paid an entry-level salary to people who'd worked for him for a decade.

I left that company and began work for another firm that did things VERY differently; they practiced the SCRUM Agile basic methodology, with daily standups, pair programming, sprint teams and retrospectives. For one day every two weeks at the beginning of each sprint, we did nothing but plan out the next two weeks' work. For a big chunk of another day, we did nothing but look back on what we'd done and find ways to improve as a team. There were developers sitting next to me who were Microsoft MVPs, getting the job done, and encouraging and complementing what I was doing.

Night. And. Day. The main difference? First, I did not feel like I was expected to kill myself for the project; a fundamental tenet of Agile is the sustainable pace of development. Second, I had a voice in deciding how I would be expected to do my work. I had to do the work, but if I had ""heartburn"" over what I was going to be expected to pull off in the next sprint, I could voice that opinion and it would be heard and given weight. If I felt there was a better way, I could say so and it would be entertained.

As far as finding solutions and resolving problems, you must be careful not to look like you're working from the top down. For computers: say your RMR (recurring monthly revenue) only allows the company to upgrade four computers every two weeks. The first four computers should not all go to the leads and seniors; they should go to the people with the slowest computers. If you give bonuses to the team, don't just give them to ""our valuable seniors and leads, without whom this wouldn't have been possible""; EVERYONE on your dev team made it happen. If a junior has a complaint, hear him out; just because he's a junior doesn't mean he doesn't know anything.

","19295","","19295","","2011-07-15 00:34:58","2011-07-15 00:34:58","","","","3","","","2011-07-14 16:28:03","CC BY-SA 3.0" "92900","2","","92862","2011-07-14 15:44:39","","20","","

Clap! Clap! Clap! You certainly must be a good person, for you have come out in the open to see what can be done to get better at your job.

Please find below what I have witnessed in a great manager, and what I have personally adopted when leading the team as a senior member.

  • M entor more than manage.
  • A llow team members to voice their thoughts and concerns. Be all ears to it. Take the constructive ones.
  • N ever betray team members by playing divisive politics. This back-fires sooner and silently.
  • A nger not. Never have grimaces on your face when you are with your team, come what may. This one is really difficult.
  • G enuinely and openly appreciate the winner for his/her good work. In the same breath, very softly and tactfully criticize the work, not the person, for any wrongs, to the person who is responsible, in isolation and not in the open.
  • E ncourage ownership and leadership in every individual. This boosts the morale and commitment of the person, because he would feel respected.
  • R oam around with your team once in a while. This one induces/increases bonding within team members.

Wish you good luck in your sincere endeavour :)

","30465","","","","","2011-07-14 15:44:39","","","","3","","","2011-07-14 16:28:03","CC BY-SA 3.0" "93195","2","","93138","2011-07-15 16:28:54","","4","","

Since they don't seem to know what SO is, I'd say start with that.

Simply put, StackOverflow specifically (though its objective sister sites, such as Serverfault, fall under this too) has questions and answers that are objective, and therefore provable. Either the proposed solution works, or it doesn't.

It's targeted. StackOverflow is specifically designed for programmers to help each other. Serverfault is specifically for server admins, and so on. Therefore, it's more likely to attract people that are well-known as experts in the field (for example, Phil Sturgeon, a big contributor in the CodeIgniter community, is an active SO member) than, say, Yahoo answers. If you ask a question on SO, there's a very high chance of it getting seen and answered by the high-profile, heavy hitters in that technology. Who better to ask for help on something than the creators of the technology?

It can be a passive way of finding answers. Generally, when I ask a question on StackOverflow, it's after I've exhausted my mental pool of Google search terms (which often lead to SO questions anyway, I'm still not sure how any programmer hasn't heard of this place anymore, but that's beside the point) and my own ideas for solutions. So, once I ask a question, I move on to other problems, so I don't get stuck in ""forest for the trees"" mode on that one, and wait for answers to come along. In that sense, I'm more productive, because I'm not spending more time re-searching and re-digging through Google for an answer that may or may not exist in writing yet. Once someone proposes a solution (and they're generally quick), I can do the legwork of getting it in and adapting it to my specific needs.

It helps the programmer community. If you fully participate in SO (ie - you accept answers, vote on questions and answers, and submit your own answers), then you're helping any other poor sap that might come along after stumbling over the issue you once had, yourself (after all, if you had an issue, someone else is bound to have had the same, or something close enough to apply). At the very least it gets more info out there. Even if you never hear feedback from these people, remember the ""silent majority"" that come across these resources, but don't make themselves known, even if you did help them.

","19699","","","","","2011-07-15 16:28:54","","","","3","","","","CC BY-SA 3.0" "386277","2","","386273","2019-01-29 13:18:48","","8","","

Quality gates – checks that must pass before some changes can be merged – are a useful way to detect quality problems early. These gates can include any kinds of quality checks, including running test suites or using static analysis tools. The idea is that finding and fixing problems early is much easier (and therefore cheaper) than having to debug changes later in your software development process.

Such quality checks might be part of a developer's personal workflow, but it's best to not count on that. In a pull-request based workflow, it is useful to apply a quality gate before a pull request with some feature can be merged into a shared branch that will then be used as a basis of further development. Such quality checks can then be run by an external CI server, not just on the developer's local computer.

Opinions diverge on whether the CI result should merely be informational or whether any detected problems must be fixed. A team will typically adapt the configuration of the CI system over time so that unnecessary warnings are silenced and relevant problems are treated as hard errors. For milder problems, more intricate metrics might be used for quality gating. For example, a change may not increase the absolute number of warnings, or may not add warnings on lines that were changed. Of course, a list of warnings is much more helpful when it is small and actionable, so often quality gating on static analysis results requires a high quality level in the first place.
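
As an illustration of such a metric, a hypothetical gate script might compare the static-analysis output of the target branch with that of the pull request and only fail on regressions (real tools match warnings more robustly than by raw line number, since lines shift between revisions):

def gate(baseline_warnings, new_warnings, changed_lines):
    # baseline_warnings / new_warnings: sets of (file, line, message) tuples
    # changed_lines: set of (file, line) pairs touched by the pull request
    if len(new_warnings) > len(baseline_warnings):
        return False, 'total warning count increased'
    introduced = {w for w in new_warnings
                  if (w[0], w[1]) in changed_lines and w not in baseline_warnings}
    if introduced:
        return False, '%d new warning(s) on changed lines' % len(introduced)
    return True, 'ok'

ok, reason = gate(
    baseline_warnings={('app.py', 10, 'unused import')},
    new_warnings={('app.py', 10, 'unused import')},
    changed_lines={('app.py', 42)},
)
print(ok, reason)  # True ok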

In case hard quality gates are not desirable, a milder version is to run any check after the changes have been merged. However, this makes it easy to ignore these problems and let issues pile up. Quality gates require the issue to be fixed before the change can pass the quality gate, so that helps to keep bad code out of the system in the first place.

Using quality gates with static analysis for pull requests is effectively standard in open source projects. Contributors might have a variety of experience levels, so automated checks can help them fix these problems before someone spends time on a code review. In a closed team with lots of experience, that is less important because there will be fewer problems in the first place – but most teams do not have uniformly high experience, so automating some aspects of code reviews through quality gating can save a lot of time.

","60357","","","","","2019-01-29 13:18:48","","","","0","","","","CC BY-SA 4.0" "93714","2","","93711","2011-07-17 18:17:51","","13","","

Absolutely.

The whole point of Scrum is to get the product owner's feedback at the end of every sprint to make sure you are building the correct things.

The sales manager should not be interrupting you in the middle of a sprint. But when you do the presentation at the end, he should be there, and you should be writing defects for each and every one of his complaints (he is acting as the voice of the user).

Then, when you have your sprint planning meetings, these defects should get prioritized with the rest of the product backlog and brought into the sprint backlog as deemed fit. Note that the sales manager should also be part of the product backlog prioritization (at the product owner's discretion), BUT not part of the sprint planning.

Engineers are notoriously bad at implementing interfaces for users (you think differently from your users). The sales team has to sell the product, and presumably he knows what sells. His advice on UI layout (how it looks, not how it is implemented) should probably be considered better than yours, unless you have a dedicated UI expert on the team or you are copying some other common interface that is well known and used (in which case you had better be able to show it in use).

Note: This is also a useful way to stop repeatedly updating the same thing. Make a backlog item for it, and make sure the product owner has seen it. If they disagree or think the sales manager is wrong, that should be noted in the backlog item and the item closed. If the sales manager brings it up again, you can point him to the backlog item, saying this subject has already been discussed and decided.

","12917","","-1","","2020-06-16 10:01:49","2011-07-17 18:38:04","","","","3","","","","CC BY-SA 3.0" "176157","2","","176153","2012-11-17 01:25:43","","16","","

One of the best things to me about being a developer is that every day is a learning process. There will always be someone out there who doesn't know something which you do, and there will always be someone who knows something which you don't. I certainly wouldn't consider myself to be anywhere but at an entry/junior level, but I appreciate any criticism I can get as long as it is both justified and given with respect.

An analogy that might be fitting relates to a time period in which I was a writing tutor at a university, as well as when I took part in creative writing courses. Writing code, for all intents and purposes, is much like writing a poem, essay, short story, or novel. Each individual has their own way of going about doing it, but at the same time, even the best writers (or, in this case, developers) make mistakes or let things slip through the cracks. We can often fail to notice these things because we become so used to our own voice (or again, in this case, style of code).

Much like in any field, there are those who are considered to be experts. If those people didn't exist, we wouldn't have anyone to learn from. Assuming this individual in question is truly an expert, I would listen to what he says and ask what he would suggest you do to improve upon your code. Never forget, though, that he is not the only one who can give his assistance; we have the good fortune that a plethora of resources such as SE/SO exist.

","63286","","","","","2012-11-17 01:25:43","","","","3","","","2012-11-17 09:46:21","CC BY-SA 3.0" "93819","2","","44150","2011-07-18 06:34:49","","2","","

It's not about the language, it is about the process

The benefits of pair programming are that you have (imho):

  • instant review
  • a fast knowledge transfer
  • fewer hidden disagreements ('awww, come on, curly braces on the same line?!'),
  • and more (meaningful) communication.

All these points target the way you work and how the team interacts.

Which may be why you perceive the ruby culture as more prone to pair programming: the community has some very strong voices talking about teams.

","22760","","6586","","2011-07-18 13:53:44","2011-07-18 13:53:44","","","","0","","","","CC BY-SA 3.0" "283397","1","283460","","2015-05-09 18:08:40","","1","149","

Disclaimer: If you are not terribly interested in numerics and mathematical processes, this is most likely nothing for you.

I am currently a bit stuck in the development process of a private project I have followed for a long time, and I think I need some input, because I cannot decide how best to do it. Unfortunately I must explain the problem a bit.

EDIT: This is NOT about rounding implementation. This is already done, both for float/doubles and arbitrary precision. You can assume I have enough knowledge about numerics, rounding modes and floating-point problems; the problem is how to design rounding operators in a stack-based program.

I am interested in the reproducibility problem in numerical computations. Due to the immense power (billions of operations per second) and certain problems with the languages used for number crunching (C and FORTRAN), people cannot check what the machine is actually doing in contrast to what the machine is supposedly doing (deterioration of language standards like indifference to qNaNs and sNaNs, allowing a+(b+c) == (a+b)+c to be ""optimized"", x86 vs MME floating-point, silent FMA support, etc.). This view is shared by William Kahan, the designer of the x86 floating-point architecture.

So I worked on a stack-based language like FORTH which allows reproducible results. You have types: aggregates (Vector, Matrix, Tensor) and numbers (complex, reals). You put two instances on the stack, execute an ADD, and you get the resulting sum. If the instances were two floats, you get a float. If the instances were two doubles, you get a double. If they were two arbitrary-precision numbers, you get an arbitrary-precision number. The same with vectors and matrices.

My aching problem is rounding. If we exclude exact rationals, after division every higher operation (algebraic and transcendental operations) requires rounding. There are two rounding operations which are already implemented: place rounding, which sets the exact position of the rounding (rounding down: 2.345 using 0 => 2 / 2.345 using -1 => 2.3), and significant-digit rounding, which sets the length of the result (rounding down: 2.345 using 1 => 2 / 2.345 using 2 => 2.3).
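
To pin down the semantics of those two operators, here is a minimal Python sketch on top of the decimal module (this is only to illustrate the behaviour described above, not the actual implementation, which already exists):

from decimal import Decimal, ROUND_DOWN

def round_place(x, place):
    # Fixed-position rounding: place 0 -> units, place -1 -> tenths.
    return x.quantize(Decimal(1).scaleb(place), rounding=ROUND_DOWN)

def round_significant(x, digits):
    # Fixed number of significant digits.
    exponent = x.adjusted() - (digits - 1)
    return x.quantize(Decimal(1).scaleb(exponent), rounding=ROUND_DOWN)

print(round_place(Decimal('2.345'), 0))        # 2
print(round_place(Decimal('2.345'), -1))       # 2.3
print(round_significant(Decimal('2.345'), 1))  # 2
print(round_significant(Decimal('2.345'), 2))  # 2.3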

The approaches:

  • One ring to rule them all: Only one rounding setting for all available types. Problem: Some operations are exact and should be executed as such; arbitrary-precision numbers offer exact addition and exact multiplication. If I define e.g. ADD_EXACT, I introduce a keyword which is not implemented by most datatypes and will therefore be a normal ADD, and the danger is that I (or others) forget to use ADD_EXACT when necessary. Another problem is that it makes sense to use different rounding modes for some operations: place rounding for addition and significant-digit rounding for multiplication. I fear swamping the program with unnecessary switches to set the rounding mode.

  • Perhaps some more rings....: Several settable rounding modes. This allows addition and multiplication to be rounded independently. Problem: Many more possibilities (set and retrieve rounding mode, rounding operation and rounding parameter). How many choices do I offer? Only two, and risk that e.g. FMA cannot be supported, or many more, and risk overwhelming the program with much too much rounding bureaucracy?

Rounding is a crucial operation, and once implemented, it cannot be changed easily. This must be solved as well as possible.

What I need is a simple and elegant model, and I fear that I have worked so long on it that I simply do not see it and/or need a plan to find the right solution. Help is very much appreciated, and I will update this post to clarify some aspects I may have forgotten.

EDIT: Performance is not the design goal. It does not help you to get an incredibly fast wrong answer. It does not help you if two programmers on different machines get different (or even similar) results when no one knows what the right answer is (or whether both are wrong). The goal of the program is exactly that: find out when IEEE 754R precision will fail, and find indicators and methods to validate numeric results. If you know that double precision and the tool will get you to the solution, by all means use C and FORTRAN.

","179423","","179423","","2015-05-09 19:10:17","2015-05-10 21:46:18","How to implement rounding in an all-purpose stack language using different types?","","2","2","","","","CC BY-SA 3.0" "94707","2","","94489","2011-07-21 04:11:00","","5","","

In order to properly answer this question you first need to decide: What does ""delete"" mean in the context of this system/application?

To answer that question, you need to answer yet another question: Why are records being deleted?

There are a number of good reasons why a user might need to delete data. Usually I find that there is exactly one reason (per table) why a delete might be necessary. Some examples are:

  • To reclaim disk space;
  • Hard-deletion required as per retention/privacy policy;
  • Corrupted/hopelessly incorrect data, easier to delete and regenerate than to repair.
  • The majority of rows are deleted, e.g. a log table limited to X records/days.

There are also some very poor reasons for hard-deletion (more on the reasons for these later):

  • To correct a minor error. This usually underscores developer laziness and a hostile UI.
  • To ""void"" a transaction (e.g. invoice that should never have been billed).
  • Because you can.

Why, you ask, is it really such a big deal? What's wrong with good ole' DELETE?

  • In any system even remotely tied to money, hard-deletion violates all sorts of accounting expectations, even if moved to an archive/tombstone table. The correct way to handle this is a retroactive event.
  • Archive tables have a tendency to diverge from the live schema. If you forget about even one newly-added column or cascade, you've just lost that data permanently.
  • Hard deletion can be a very expensive operation, especially with cascades. A lot of people don't realize that cascading more than one level (or in some cases any cascading, depending on DBMS) will result in record-level operations instead of set operations.
  • Repeated, frequent hard deletion speeds up the process of index fragmentation.

So, soft delete is better, right? No, not really:

  • Setting up cascades becomes extremely difficult. You almost always end up with what appear to the client as orphaned rows.
  • You only get to track one deletion. What if the row is deleted and undeleted multiple times?
  • Read performance suffers, although this can be mitigated somewhat with partitioning, views, and/or filtered indexes.
  • As hinted at earlier, it may actually be illegal in some scenarios/jurisdictions.

The truth is that both of these approaches are wrong. Deleting is wrong. If you're actually asking this question then it means you're modelling the current state instead of the transactions. This is a bad, bad practice in database-land.

Udi Dahan wrote about this in Don't Delete - Just Don't. There is always some sort of task, transaction, activity, or (my preferred term) event which actually represents the ""delete"". It's OK if you subsequently want to denormalize into a ""current state"" table for performance, but do that after you've nailed down the transactional model, not before.

In this case you have ""users"". Users are essentially customers. Customers have a business relationship with you. That relationship does not simply vanish into thin air because they canceled their account. What's really happening is:

  • Customer creates account
  • Customer cancels account
  • Customer renews account
  • Customer cancels account
  • ...

In every case, it's the same customer, and possibly the same account (i.e. each account renewal is a new service agreement). So why are you deleting rows? This is very easy to model:

+-----------+       +-------------+       +-----------------+
| Account   | --->* | Agreement   | --->* | AgreementStatus |
+-----------+       +-------------+       +-----------------+
| Id        |       | Id          |       | AgreementId     |
| Name      |       | AccountId   |       | EffectiveDate   |
| Email     |       | ...         |       | StatusCode      |
+-----------+       +-------------+       +-----------------+

That's it. That's all there is to it. You never need to delete anything. The above is a fairly common design that accommodates a good degree of flexibility but you can simplify it a little; you might decide that you don't need the ""Agreement"" level and just have ""Account"" go to an ""AccountStatus"" table.

If a frequent need in your application is to get a list of active agreements/accounts then it's a (slightly) tricky query, but that's what views are for:

CREATE VIEW ActiveAgreements AS
SELECT agg.Id, agg.AccountId, acc.Name, acc.Email, s.EffectiveDate, ...
FROM AgreementStatus s
INNER JOIN Agreement agg
    ON agg.Id = s.AgreementId
INNER JOIN Account acc
    ON acc.Id = agg.AccountId
WHERE s.StatusCode = 'ACTIVE'
AND NOT EXISTS
(
    SELECT 1
    FROM AgreementStatus so
    WHERE so.AgreementId = s.AgreementId
    AND so.EffectiveDate > s.EffectiveDate
)

And you're done. Now you have something with all of the benefits of soft-deletes but none of the drawbacks:

  • Orphaned records are a non-issue because all records are visible at all times; you just select from a different view whenever necessary.
  • ""Deleting"" is usually an incredibly cheap operation - just inserting one row into an event table.
  • There is never any chance of losing any history, ever, no matter how badly you screw up.
  • You can still hard-delete an account if you need to (e.g. for privacy reasons), and be comfortable with the knowledge that the deletion will happen cleanly and not interfere with any other part of the app/database.

The only issue left to tackle is the performance issue. In many cases it actually turns out to be a non-issue because of the clustered index on AgreementStatus (AgreementId, EffectiveDate) - there's very little I/O seeking going on there. But if it is ever an issue, there are ways to solve that, using triggers, indexed/materialized views, application-level events, etc.

Don't worry about performance too early though - it's more important to get the design right, and ""right"" in this case means using the database the way a database is meant to be used, as a transactional system.

","3249","","-1","","2017-05-23 11:33:36","2011-07-21 04:11:00","","","","0","","","","CC BY-SA 3.0" "177174","2","","177167","2012-11-26 08:02:46","","10","","

When it feels like someone is ""somewhat difficult to manage"" as you describe, then to better understand how that person performs, and whether there are issues (objective or subjective) impacting the productivity of dev team members, consider establishing a practice of regular 1:1's with your team members, as presented in an excellent article, The Update, The Vent, and The Disaster:

...I’m not suggesting that every 1:1 is a tortuous affair to discover deeply hidden emergent disasters, but you do want to create a weekly place where dissatisfaction might quietly appear. A 1:1 is your chance to perform weekly preventive maintenance while also understanding the health of your team.

...The sound that surrounds successful regimen of 1:1s is silence. All of the listening, questioning, and discussion that happens during a 1:1 is managerial preventative maintenance. You’ll see when interest in a project begins to wane and take action before it becomes job dissatisfaction. You’ll hear about tension between two employees and moderate a discussion before it becomes a yelling match in a meeting. Your reward for a culture of healthy 1:1s is a distinct lack of drama.


A very strong point of the mentioned article is that it is self-contained: besides explaining the benefits, it also provides a set of practical recommendations, basically allowing one to start practicing regular 1:1's without digging into other sources (although looking for additional information won't hurt, you know).

","31260","","31260","","2012-11-26 08:29:40","2012-11-26 08:29:40","","","","3","","","","CC BY-SA 3.0" "177197","2","","177167","2012-11-26 11:33:38","","49","","

This should be a surprisingly easy problem to solve.

Have a second meeting with him. Tell him that you accept that it's probably your perception of reality that is at fault. Then qualify that with ""however, if that is the case then we need to work together to improve my perception."" Finally challenge him to solve that problem, so he doesn't feel micro-managed.

This exact thing happened to me a long time ago. For me, the issue was that I dislike the possibility that anyone might think I'm seeking extra credit for simply doing my job. And that was fair enough, but there has to be a regular feedback loop between any member of staff and their line-manager.

If there isn't then you get these problems.

Regular, planned, 1:1s are a great idea. And, as people have pointed out, standups do not need to be orthogonal to working from home. But they must involve the three questions: What did you do yesterday? What are you planning to do today? And the one most people forget ... What (if anything) is holding you up?

That said, you should try to discourage situations where team members never work together. I've worked in that situation before and it seeded distrust within the team and outside it. Have a regular day that you all come into the office. Have a regular meeting where people can voice some ideas on improving processes or whatever.

Don't make it a line-reporting event. Make it an opportunity to just talk. You'll be surprised what you learn. If possible, turn that into a social event, go for a couple of drinks on work time as a bonding exercise.

","12828","","12828","","2012-11-26 11:40:42","2012-11-26 11:40:42","","","","5","","","","CC BY-SA 3.0" "95065","1","95072","","2011-07-22 06:54:12","","31","2500","

Our project head is a genius software architect, a gentle and considerate person in general, a geek by nature and delicate by voice. But, at times, we (my teammates and I) differ in opinion with our leader -- especially on software architecture issues, system design issues, UI issues, etc.

When and how (if ever) should we express the difference in opinions?

","31560","","1204","","2013-03-16 20:11:28","2013-03-16 20:11:28","When to confront a good project leader or boss","","14","8","4","","","CC BY-SA 3.0" "177306","2","","177299","2012-11-27 10:27:55","","3","","

Both for ethical/ideological reasons and for technical/sociological/practical ones, trying to measure the work performed is a very bad idea. In Italy, where I live, it is even against the law (at least in the specific case of a cooperative). I strongly suggest you just split the cooperative's global income evenly, no matter how much any individual programmer has actually worked.

The typical legal structure of a cooperative provides many other managerial/political tools to deal with lazy people or other problems. You do not need this.

You are right when you are looking at SE and other community-driven projects. In these projects, nobody tries to measure the work performed by members. There is a reason for such a lack of control. This is one of the reasons for the success of many open source projects.

Highly-skilled people, like programmers, want acknowledgment much more than anything else. You cannot measure deserved acknowledgment by hours worked or by lines of code written. Only the rest of the community (the rest of the cooperative members) can give acknowledgment to a programmer. They do it through the typical tools of any democracy: personal judgement, votes, silent recognition, given reputation and so on. Rely on these.

","73274","","","","","2012-11-27 10:27:55","","","","0","","","","CC BY-SA 3.0" "95668","2","","95637","2011-07-25 07:45:45","","29","","

(warning, long post, only partially on topic)

Well I have been asking the same thing for ages. About 6 years ago I was trying to get recruiters to understand what we were about (they just ticked the boxes as you say).

At the time I wrote:

Do you geek like we do? (Open letter to recruiters and candidates).

Our culture is all-important to us. I am not talking about race here; it is background-based: how you view your job, what you intend to get out of your job, how you approach your job, and how you deal with others.

I have been mistaken before for meaning race, so I will clarify now: this isn’t a race-based thing, it is a mindset and drive thing. We have worked with people from many races who have been great. We also know many who are plainly and simply useless. So race doesn’t define what we are looking for at all; it is a “cultural” fit.

There are many subcultures within Australia, most of which you wouldn’t pair up together; I am trying to explain ours - The Geek.

  • Many people need explicit instructions: ""A>B>C>D"". Others you can give A and some background, and they will work out B>C>D and E all on their own. We are looking for the second group.
  • Some people will simply agree with you because you are “senior” to them. Others will voice their opinions and contribute their ideas. We want the second kind. A corollary to this: if the decision goes against them, they will still throw themselves into it.
  • Some people have learnt by rote: you do A then B then C, which gives you X. Others have learnt how to learn and think - to see beyond the immediate and solve the underlying problem.

Many of our jobs over the past 14 years have come from our clients’ need to clean up and finish projects that have failed, mainly because the company has hired the wrong type of staff ... it costs far more than simply their wage if you get it wrong.

Now, to try to pick out the types of people we mean when we say “like us”:

  • Good inventors, great ideas, terrible at finishing off a project. This is describing myself. We need to hire people to cater for this problem.
  • Fantastic optimisers and “do”ers; if you want it to work really well, get them. The flip side is that they are narrowly focused and take a long time to get it there. Generally a good techie trait, but they usually can’t converse with the outside world.
  • Very good at, and knowledgeable about, “the correct way” and “end to end” work. They can see a project from start to finish and not miss stuff, “because it should be done that way”. This is an attitude we have in here; the clients know this and pay for it. Combine this with the “do”ers and they are ideal.
  • Quickest path to the immediate result. Tell everybody about it, loudly; a bit haphazard. (Don’t care - get it working.) Good for a start-up, bad for an established business that needs consistency. In a pure Support / Maintenance role this is good, provided other developers are cleaning up afterwards. For prototyping and proof of concept work this is great.
  • Generally interested. Whatever is going on … tell us about it, what can I do, how can I add my value to it, either as knowledge or sweat (getting on with something they see as required).
  • Rote learners / process workers. Where the project has been planned out to the nth degree and they have “their bit” to do and that is it. They are good in very large teams. There is no danger of “tangents” being taken and unexpected results from 1 in 200 people. They expect to be handed their “what to do” list, and then they do that and come back for the next bit. Many cultures (both race and schooling) around the world tend toward rote learners or Boss/Underling style workers. This style of person is useless to us; send them to larger corporates.
  • Our people are equals in a team, expected to work within the team to achieve the goals set by the client.
  • You do whatever is required to land the job.
  • You give your opinions and perspective without attachment.
  • You think things through and analyse boundary cases.

Language is a barrier to working with us. We pretty much have our own language in here; you at least need English and some technical skill, combined with a sense of humour.

If you don’t understand us you won’t grasp the requirements of what you need to do or how the rest of us will go about implementing the solution ... you won't last.

Why would you want to work with us?

  • You get paid. Alright, it's not the same as you would earn out in the ""real world"", but it's good money.
  • You get to participate in decisions. While the directors have final say we want to hear from all, what they think, how and why the think it. It all helps.
  • You get to research your own stuff. Interested in geek stuff, coding, new products, latest MS vs Linux war developments, Design techniques. All these things you are given time every week to research and discover what you want to. You just have to share it with everyone else.
  • You get to try out new technologies. Either through research or through new projects we want to try new things and design new things. The projects are there to allow us to do so. (provided it helps the client and doesn't cost more than the project to do so)
  • You aren't required to wear suits. Unless the situation requires it, like visiting clients or events.
  • We want you to learn more and will put you through targeted training to improve what you know.
  • You aren't usually required to run 9-5. If you are running support for an agreement that is 9-5 then you do; otherwise get the job done and don't abuse the privilege.
  • Great team to work with. Well, we think so anyway; we laugh at each other's jokes out of politeness and have a no-stabbing-in-the-back policy. 
  • We are geeks as well. Some of us have girlfriends and kids, but don't let that fool you.
  • We enjoy the respect of some very big companies and can walk in without question.
  • Our client base is spread around Australia and across the globe. Leaves a lot of scope for travel and
  • We build very good relationships with our clients and their employees which means we have lots of people we can go drinking with.
  • If you have a need or problem we don't mind you taking the time off to sort it out. So long as you make up the difference with a few extra hours later on.
  • Your ideas are valued and you get to see a greater reward for those ideas.
  • You share in the success of Redgum.

Now, do you still want to work for us? Why?

Conclusion

I wrote that in 2004/05. I have done some 50 or 60 interviews myself and worked with 14 or so recruitment agencies who threw anyone who ticked the boxes at me ... most of this was a waste of time, and I suck at picking people from an interview.

So far the most success I have had is in finding one single recruiter who understood the meaning behind the above and what I was looking for and could filter down the list to people who fitted.

Now I have one recruiter who I trust knows my business and knows my needs; we have lunch every other month to catch up ... I let him go, give him the time, and trust that he will only show me appropriate candidates.

Recruitment is a specialist area, and while at the end of the day you have the final say ... if you have the money, let the people with the skillset do their thing.

Once they have found someone, I interview them, ask them about their experience, their interests, the things that motivate them, the coolest projects they have done, hear their answer to the above ... once I am convinced I bring them in for a second interview with the team over lunch, everyone else in the team asks them questions and lets me know the thumbs up or down ... then we hire.

","3490","","","","","2011-07-25 07:45:45","","","","4","","","2011-07-25 15:59:16","CC BY-SA 3.0" "177675","1","177720","","2012-11-29 14:23:45","","5","341","

I've seen the following approach suggested countless times for ""taking in a collection of objects, doing X to each object that is a Y, and ignoring it otherwise""

def quackAllDucks(ducks):
  for duck in ducks:
    try:
      duck.quack(""QUACK"")
    except AttributeError:
      #Not a duck, can't quack, don't worry about it
      pass

The alternative implementation below always gets flak for the performance hit caused by type checking

def quackAllDucks(ducks):
  for duck in ducks:
    if hasattr(duck,""quack""):
      duck.quack(""QUACK"")

However, it seems to me that in 99% of scenarios you would want to use the second solution because of the following:

  • If the user gets the parameters wrong then they will not be treated like a duck and there will be no indication. A lot of time will be wasted debugging why there is no quacking going on until the user finally realizes his silly mistake. The second solution would throw a stack trace as soon as the user tried to quack.
  • If the user has any bugs in their quack() method which cause an AttributeError then those bugs will be silently swallowed. Once again time will be wasted digging for the bug when the second solution would simply give a stack trace showing the immediate issue.

In fact, it seems to me that the only time you would ever want to use the first method is when:

  • The block of code in question is in an extremely performance critical section of your application. Following the principle of ""avoid premature optimization"", you would only realize this, of course, after you had implemented the safer approach and found it to be a bottleneck.
  • There are many types of quacking objects out there and you are only interested in quacking objects that quack with a very specific set of arguments (this seems to be a very rare case to me).

Given this, why is it that so many people prefer the first approach over the second approach? What is it that I am missing?

Also, I realize there are other solutions (such as using abcs) but these are the two solutions I seem to see most often for the basic case.
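
For reference, this is roughly what I have in mind by the abc approach - a minimal Python 3 sketch, with made-up class names:

from abc import ABC, abstractmethod

class Quacker(ABC):
    @abstractmethod
    def quack(self, sound):
        ...

class Duck(Quacker):
    def quack(self, sound):
        print(sound)

def quackAllDucks(ducks):
    for duck in ducks:
        # explicit membership test; non-Quackers are skipped,
        # but bugs inside quack() still surface as stack traces
        if isinstance(duck, Quacker):
            duck.quack('QUACK')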

","17157","","","","","2012-11-29 19:50:21","Questioning pythonic type checking","","2","5","","","","CC BY-SA 3.0" "95741","2","","95637","2011-07-25 14:55:51","","1","","

Before you can hire passionate programmers you need to determine what you mean by that.

When I look for passion in programmers it has to do with the enthusiasm in their voice as they discuss a difficult work problem they had to solve. It has to do with being passionate enough to get some depth of knowledge and stepping up to solve the hard problems. What it has nothing to do with is whether they program outside of work or can name three famous programmers from the past by looking at their pictures.

When interviewing you can hear passion in the way they answer questions. They go into greater depth than the non-passionate people and they tend to be enthusiastic in what they say. They understand the business domain they have been programming in and are able to talk about how they solve problems and what suggestions they have made in their jobs to improve the programming processes or the design of the application. They talk about refactoring and design patterns without being asked specifically about them.

When they talk about their achievements, they talk about things that go beyond basic coding of a module. They talk about how they saw a problem in the design and refactored or they talk about how they found a new technique to use to solve a difficult problem and they talk with enthusiasm. A passionate person is difficult to shut up. They really want to describe their achievements and goals for the future. They may have things they specifically would like to work on that your job offers and their current one doesn't. They show a pattern of growth in skill and complexity of what they do.

","1093","","","","","2011-07-25 14:55:51","","","","0","","","2011-07-25 15:59:16","CC BY-SA 3.0" "96425","2","","96331","2011-07-27 17:30:31","","122","","

Such a good question because it is a problem we all face as freelancers. When I made the transition to being a freelancer, the hardest thing for me to develop was a time tracking discipline. For the first year or so, I just focused on project-oriented work, and really only bothered with timers when I was ""in the zone"" of coding. In time I learned what a huge disservice to myself it was for me not to track as much of my day as possible.

Even as I write this comment, I have a timer running entitled, ""Blogging on Stack Exchange."" But more on that in a second. First let's address your question.

As it relates to time tracking, one of the things I found as a freelancer is that there were certain clients who tended to have lots of little issues. As an amateur, and because I felt I was being a ""good guy,"" most of the time I wouldn't even bother billing the client. Taking two minutes to fix a problem, which sometimes is all it takes, seems hardly worth the effort of starting a timer. What I found, however, is that over the course of a month, it was not just one two-minute problem, it was 10 or 20 two-minute problems. Taken by themselves, they were no big deal. Taken in aggregate I was leaving money on the table. But more than that, the client had no visibility into the quantity of work I was doing for them. As a result they tended to either a) undervalue my work, b) take advantage of me, or c) just take me for granted.

This is not a good relationship to have with anyone, especially a client.

Next, as someone else pointed out, nothing really takes two minutes. There is the email, the phone call, the logging into the bug tracking system, and all of the other artifacts of a good process. The process - the customer service in speaking with the client on the phone - is all part of the value you provide, and thus should be something you are compensated for. And clients should know how much time is spent by you on the phone and answering email. There was one time I presented an invoice to a client that showed how much time was spent on the phone with them. They later told me that they had no idea, and that they worked to curb their tendency to default to calling me on the phone when they had a question. A fact I appreciated, given how disruptive a phone call can sometimes be.

I also agree that you should bill in reasonable increments. I bill in 15-minute increments, which is just a fancy way of saying, ""I have a 15 minute minimum on any issue you want me to tackle."" There are many reasons for this, but for me, the biggest reason is the hidden cost of context switching. For me to go from one task to another is not instantaneous. If only it were. Moving from one task to another often involves stopping to check email, going to the bathroom, looking at G+/Facebook/Twitter, etc. One could say that I lack discipline, but for me this is integral to the process of switching gears. Therefore, if I have 4 tasks on my plate that take 15 minutes each, it doesn't take me an hour to complete them, it takes me about 1.5 hours. And that additional 30 minutes is the hidden cost of context switching. And my clients pay for that through my minimum billable increments.

Many people have also mentioned and talked about the additional value you provide as a more experienced programmer. The fact that it takes you half as much time to perform the same task as a colleague is reflective not only of your superior experience, but also of a better process you have built for yourself in managing your clients. This all speaks directly to the value you provide and you should compensate yourself fairly for it. This requires you to understand what your competitors are charging relative to the quality of their work. Personally, I maintain close relationships and friendships with the other freelancers in my field, which gives me insight into this problem and allows me to adjust my rates accordingly. If you find that by and large you produce the same quality work in less time, then by all means charge more. If your clients can't afford it, then look for new clients and move up in the world. Leave the penny-pinching clients, and the clients who don't value the work provided to them by their freelancers, to smaller fish. Refer those clients to other freelancers you trust and make them someone else's problem while you work on building up a clientele that pays you more fairly.

The last thing I wanted to share was something no one else really touched upon that I could see. Sometimes comping the client for the 2 minutes of work is the right thing to do from a client management perspective. Sometimes, giving them that time is what helps you build trust with the client, and firmly establishes you as the go-to person for them. It might also help you secure larger and more profitable projects in the future. Knowing when to charge and, more importantly, when not to charge is the hard part. But when I make the decision not to charge a client, I do go out of my way to tactfully tell them that this is ""on the house."" I tell them that I appreciate all the business they send my way, and that I don't mind taking care of this one issue for them. It's the least I can do, I tell them. They are usually very appreciative, and I feel it helps strengthen our relationship.

Now permit me to return to the timer currently running on my desktop entitled ""Blogging at StackExchange."" This is not directly related to your question, but helps underscore the importance of maintaining a discipline with keeping accurate track of your time.

From a business perspective, the most important metric you can track is profitability. Knowing how much time is spent doing billable vs. non-billable work is very important. It helps you establish and understand how much overhead exists in running and maintaining your business. It also helps you to identify ways in which your business and process can improve. If you realize at the end of the quarter that you spent a lot more time than you thought ""blogging at Stack Exchange"" and it came at the expense of actual billable work, then you might want to consider spending less time doing it. With regards to profitability though, what I find is that there is A LOT more time that goes into a project than the time that is spent coding. Not only is there all the email and other tasks mentioned before, but there is the time spent securing the deal, billing the client, negotiating contracts, and the like. Much of this time is not billable, but knowing how much time you spend doing this might help you identify ways to streamline your business, and increase profitability at the same time. Let's say for example you charge $100 per hour, but that you spend roughly 50% of your time doing administrative non-billable work. Perhaps there is a person out there you could hire at a rate of $50/hour to take that administrative work off your hands. Then you could spend more time coding, AND increase your bottom line at the same time. It's a win-win. You are giving someone else valuable work, you almost certainly provide a better service to your clients, AND you make more money.

And there you go, 0.79 hours spent ""Blogging at Stack Exchange."" I will chalk that up to my marketing budget. :)

","30573","","30573","","2011-08-01 05:18:08","2011-08-01 05:18:08","","","","4","","","2011-07-28 14:25:33","CC BY-SA 3.0" "285170","2","","285165","2015-05-28 11:23:29","","1","","

I think it depends on a lot of social factors. For one, does it even affect you or your position if you do not adhere to the style guide? After all, a lot of code has been written by others with this guide in place already. If it does not really affect you, do you (for whatever reason) want to champion the style guide?

Most of the time, I found that once a style guide was ignored by one or more teams without serious repercussions, the guide may as well be removed. It simply is not sufficient to write a document that explains the style guide; one needs to enforce it as well.

Hence, a more fundamental issue is present here: Does your company's style guide still have supporters, and why did they not enforce it?

You can think of it the other way round: you wouldn't have even bothered with this question if your boss had told you that you must write any and all code for all projects in accordance with this style guide. Apparently though, you have doubts as to the applicability of the guide instead.

In summary, what can you do? Given the above, these are a few possible ways, but in the end it is up to you to choose:

  1. Ignore the style guide completely, because everyone else does so as well and, as a matter of fact, it is no longer relevant.

  2. Silently apply the style guide to your modifications and if someone complains simply point to the guide. You're not doing anything wrong, but you're also not doing much more than the minimum.

  3. Champion the style guide. Bring up the problem that this code has ignored the guide, and that this fact, in itself, is a problem that should be addressed. Get your voice heard and be part of the group that actually enforces the style guide, because you think it is important.

  4. Challenge the style guide. Almost as above, but you don't agree with the guide and want to get rid of it.

","16375","","","","","2015-05-28 11:23:29","","","","2","","","","CC BY-SA 3.0" "388746","2","","388741","2019-03-16 18:43:05","","2","","

The voice field is a member of the AbstractAnimal class; if you access it in the Dog class using the this pointer, then, as far as the subclass is concerned, voice is part of the base class' public interface towards its subclasses. This means that there's extra coupling between the two classes, as both AbstractAnimal and Dog now rely on voice being present in the superclass, and on voice supporting particular operations. Now, this is mitigated by the fact that JavaScript supports duck-typing, but nevertheless, these are things that you as a programmer have to consider.

This coupling means that it will be difficult to change the internal structure and implementation of AbstractAnimal (e.g., remove voice, or represent it in some other way, and/or change some methods that use it) without affecting existing subclasses. This is all assuming that there's actually some role voice plays in the superclass, some behavior implemented in AbstractAnimal itself; if that's not the case, consider if AbstractAnimal should have the voice field in it at all.

Initializing superclass members via the constructor, and treating the voice field as private to the superclass, hides the internal details behind the interface of AbstractAnimal. This is generally a good idea, but again, you'll have to decide if decoupling is worth the hassle for you. If it is, you would design your classes so that each can do its job and collaborate with the other without having to rely on the other's internals. In statically typed languages, you can enforce this to some extent; here, it requires developer discipline and, in a team setting, clear communication among the team members.

Another thing you can do is to have a narrow interface for the client code (code outside of the inheritance hierarchy) and a wider interface for the subclasses (a few extra methods and maybe fields that subclasses are allowed to use, but that are not to be used from the outside). This should be somehow documented and communicated to the readers of the code. In other languages, you would do this by using the public and protected access modifiers, in ES you'd rely on naming conventions (_name) and/or certain tricks (e.g. leverage closures).

the second option will bubble everything up to one place, but than you could end up with huge constructors taking a lot of arguments.

Well, don't end up with such constructors; don't design your classes in such a way. If you have too many arguments, examine your class design more closely. Maybe your classes really do need a bunch of parameters, in which case you can bundle them in a parameter object, but more likely, your classes are trying to do too many things at once (they handle too many responsibilities), so you should break them up.

Finally, a superclass should be a behavioral abstraction of its subclasses (in the sense of LSP); you should generally avoid inheritance used just to inherit data members and some useful functions, although this can be handy. In the famous GoF book there's a well-known line:

Favor object composition over class inheritance.

They state this as one of the principles of OO design; composition is a more flexible alternative, and designs can often be made simpler with it. They acknowledge that this is not always achievable in practice, but that more often than not, programmers overuse inheritance.

So, generally speaking, reserve inheritance for creating behavioral abstractions and subsystem facades to program against, and if you need to compose objects out of smaller parts, use traditional composition, or mixins.

","275536","","275536","","2019-03-16 18:51:08","2019-03-16 18:51:08","","","","0","","","","CC BY-SA 4.0" "285739","2","","285677","2015-06-03 18:03:16","","12","","

The real reason is a lack of need for it. Layering databases on top of files, rather than merging them, handles the vast majority of situations at least as well as a merged solution with substantially reduced complexity. In some situations others have mentioned, we've also layered parts of files on top of databases (such as permissions structures). In that case, the database managing those permissions is remarkably simpler than a commercial RDBMS.

There are advantages to merging them, but so far those have been few and far enough between that the movement is growing slowly. Consider how rare it is for people to say ""Give me the 3rd column of every invoice I've received since 2010, and sum them together,"" or ""don't let me delete this file until I've removed it from the Excel spreadsheet also.""

File systems have a few advantages over relational databases that keep them going:

  • They are far simpler. This is a big deal when bootstrapping a computer. Even on Android, where they have an RDBMS for storage, they have plain old images for managing the initial bootloading process.
    • It is easier to define their limitations. In an unlimited machine, RDBMSes provide quite a lot of power. However, in the file system world, there are a lot of limitations which stem from trying to be fast when directly layered on top of a spinning disk. It is harder to prove that an RDBMS query does not exceed those limitations than it is to provide the same guarantees for a file system.
  • They handle hierarchical structures better. In many cases, it is still natural for people to store files in a hierarchical form. In RDBMSes, that is a special case. File systems optimize for that special case, RDBMSes do not.
  • Reliability. It is much easier to prove that two layers work independently than to prove that one giant system works perfectly. RAID arrays, fail-safe journals in times of power failures, and other advanced features are easier to implement in a layer below the layer dealing with things like ACID or foreign key constraints.
","120232","","591","","2015-06-04 17:49:37","2015-06-04 17:49:37","","","","2","","","","CC BY-SA 3.0" "98242","2","","83372","2011-08-03 14:17:55","","1","","

The type information is needed of course. The whole point of writing programs is manipulating data structures. Why some people think that throwing crucial information away is a good idea is beyond me.

Have a look at these signatures:

  1. Static typing:

    public List<Invoice> GetInvoices(Customer c, Date d1, Date d2)
    
  2. Poor (also called dynamic) typing:

    public GetInvoices(c, d1, d2)
    

In (1) there is clarity. You know exactly what parameters you need to call the function with and it is clear what the function returns.

In (2) there is only uncertainty. You have no idea what parameters to use and you don't know what the function returns, if anything at all. You are effectively forced to use an inefficient trial and error approach to programming. You never reach the point where you know what you are doing because crucial information is intentionally hidden. All that in the name of simplicity and speed of development. To add insult to injury, this is preferred by some people for some strange reasons and called ""modern"".

Imagine maintaining a project with hundred thousands or even millions of lines of code written in this ""style"" by multiple programmers...

I don't know why it is called ""dynamic"" at all. It is like writing a Java program where every parameter, return value and local variable can only be of type object. Type declarations would then indeed become redundant. You could then invent a static method to call a method ""dynamically"":

public class DynamicMethodInvoker
{
    // second parameter should be a string, but we are restricted to a single type
    public static object Invoke(object thing, object method, object[] parameters)
    {
        // do some reflection stuff here
    }
}

Suggested reading: Dynamic languages are static languages

","33288","","33288","","2011-08-03 14:27:32","2011-08-03 14:27:32","","","","0","","","","CC BY-SA 3.0" "98623","2","","98048","2011-08-04 14:49:54","","2","","

I couldn't help but add a few government driven ones:

  • The closed source version is certified for compliance (pick your cert); the open source is not. Not always true, but a general trend - when a body of people are working on open source, it's not always so clear who will fund and maintain the typically expensive certification. When I say certification, I mean FIPS, EAL, and probably many others for other industries.

  • Sweet tech support package - more and more, big open source offers quite competitive tech support options. But in places where there is no value added by your company's developers becoming good enough at the problem domain that the code solves, it really is a better trade-off most of the time to be able to get features and bugs managed by the producer of the software. Not true for every situation, but especially in niche areas - there may be open source, but it's less likely it'll have good support.

  • Security paranoia and the desire for un-public code. I know a company is unlikely to share its code with the outside world... open source is by its very nature... open. Yeah, I agree ""security through obscurity"" is a very bad plan, but I can at least understand the paranoia, and when this is driven by external customer requirements and those requirements pay your bills - sometimes it isn't worth the battle.

  • Import/export rules - it's not unusual in government work to have rules about code being made by the country or its allies. With open source, it's hard to say who made it. With a proprietary code base, you know exactly who - they are the guys who sent you the invoice.

","12061","","","","","2011-08-04 14:49:54","","","","0","","","","CC BY-SA 3.0" "98910","2","","98905","2011-08-05 17:11:01","","4","","

I have sat in on interviews and noticed a big disparity between individuals of similar competency at answering questions on a whiteboard during an interview. Generally, being able to clearly explain your thinking, write readable code with the dry-erase pen, and avoid long moments of silence tended to result in more favorable reviews of the candidate, even though in the end the answers were about equally correct.

I don't remember the last time I worked as an individual when developing software. I always had to coordinate my activities with others, discuss my design and implementation decisions, and work with others to construct software. Demonstrating communication skills in an interview is a huge plus. Interviews can make you nervous, but so can looming deadlines and the pressure of the job.

I would also reiterate my comment. Given the team-oriented nature of software engineering, you have to consider more than technical competence. The ability to speak and write, especially technically, is important for most positions. I would assess the competence of someone on all of the factors relevant to the job, not just their ability to build software.

What are some ways that one can get better at whiteboard interview questions?

Are there ways to be better prepared?

I can think of two reasons why someone might have poor responses to whiteboard questions: they don't have a good grasp of the technical information or they are a poor speaker/presenter. Of course, it could always be both of these.

The way to get better depends on the problem. Technical improvement comes by reading, doing, and asking questions (usually in that order). Improving poor presentation skills comes through practice, although some people are just naturally good speakers, while others aren't. I think that anyone can develop the communication skills, but personality will play a huge role in how good someone actually is.

Tips for how to proceed during the interview?

It depends.

More detail is always good, even to the point of total ""brain dump"" to the interviewer. If I wasn't giving enough information, I've had interviewers ask me to explain something in more detail, and they typically asked explicit, to-the-point questions about my design or code.

Spending a couple of minutes thinking through the problem beforehand, without saying or doing anything, is always a good idea. You can use this time to also ask questions to clarify what the interviewer is looking for. This will not only give you the opportunity to give the interviewer exactly what they are looking for, but also show that you can think your way through multiple possibilities.

","4","","","","","2011-08-05 17:11:01","","","","0","","","","CC BY-SA 3.0" "99080","2","","99050","2011-08-06 14:39:32","","2","","

Given you've mentioned Windows Installer and MSI's, I guess you're mostly interested in deployment of client/desktop style apps? Automatically deploying to the build server itself should be relatively straightforward (by triggering a silent install as suggested by other posters) - for testing purposes this would only make sense for server apps (or in a really small team where the tester was using the build server itself).

The best solution I can think of if you want to push the install to testers' own machines is to use group policy deployment on the domain to roll out builds (see http://support.microsoft.com/kb/816102). This will require a manual step to publish the update, which is not ideal for your purposes (if slightly important for network security :-)). It also requires you to have admin privileges on the domain controller, which might be a limiting factor depending on the size/structure of your organisation!

","33475","","","","","2011-08-06 14:39:32","","","","0","","","","CC BY-SA 3.0" "390547","2","","390546","2019-04-17 21:58:34","","3","","

It depends.

If the collection represents a series of independent operations passed together to cut down on chatter, then it can make sense to return a collection of results. Some passed, some didn't.

If the collection represents a series of transactional operations passed together to imply ""please do all of these things"" then you should treat them as a proper transaction - either they all succeed or they all fail.

If the collection represents a set of inputs, then it usually is useful to discard ""bad"" inputs. This helps prevent people from flooding your service with bogus requests. It also allows you to treat ""non-existent"" inputs the same as ""invalid access"" inputs so that you're not leaking data to people without permission to operate on certain inputs. A bunch of invalid inputs is the same as an empty set of inputs, which reduces complexity of error handling.

If the collection represents a cohesive bundle of data, then modifying that bundle is often not a good idea since you're silently changing its meaning. In those cases it's usually better to return a nice single error about the invalid input (unless it's a permissions or existence issue, in which case you want it to look like a nice single error about normal invalid input).

","51654","","","","","2019-04-17 21:58:34","","","","0","","","","CC BY-SA 4.0" "390999","2","","390985","2019-04-26 13:09:36","","5","","

The statements that you claim should be true in Scrum aren't necessarily true in Scrum, according to the Scrum Guide.

It is not true, according to Scrum, that you should be estimating user stories in story points. The Scrum Guide mentions neither user stories nor story points. In Scrum, you have a Product Backlog that contains Product Backlog Items, and one of the attributes of a Product Backlog Item is an estimate. However, there are many ways to estimate, and an estimate doesn't even need to be a numerical value. If you have decomposed all Product Backlog Items to a similar size, that is sufficient. The purpose of the estimate is to enable the Product Owner to understand how much effort is likely to be required to complete the work, so that they can properly order the Product Backlog, and to enable the Development Team to effectively plan a Sprint by determining what Product Backlog Items can likely be completed in an iteration.

Scrum is silent on whether the Development Team should be assigning work during Sprint Planning. The Development Team is self-organizing, and the Scrum Guide does say that it is solely up to the Development Team to determine what they can achieve over the course of the Sprint and how it is done. The requirement is that, by the end of Sprint Planning, the Development Team has done enough work to determine which Product Backlog Items they can likely achieve by the end of the Sprint timebox. Some teams prefer to assign out all of the work, at least tentatively. Other teams prefer to pull the work in as the Sprint progresses. Both are valid, and both have tradeoffs.

In Sprint Planning, you don't need to break down every single Product Backlog Item. You need to do sufficient breakdown of the work to ensure that you can take a subset of the Product Backlog and forecast that you will likely get it done. This work can be done through refinement activities that occur throughout the Sprint (there is no specific Scrum event for refinement) as well as at Sprint Planning. The act of placing estimates on Product Backlog Items is an outcome of refinement, which may mean estimating in points, in hours, or even with no value at all, simply ensuring a nearly uniform size of Product Backlog Items.

Now, moving on to estimates. There are some good practices for estimating work in software development. One of those practices is not assuming that the work will be done by any particular person. Different people take different amounts of time to achieve something, simply based on their knowledge and the set of skills needed to do the work - not everyone is equal. You shouldn't be estimating work to be done by the expert on the team, since that person may not be the one doing the work. On the other hand, estimating as if the least skilled person is going to do the work would result in overestimating everything. So you need something in the middle. Techniques such as Wideband Delphi and Planning Poker are designed to help with this.

This is also one of the reasons why people tend to avoid estimating in hours. There are two considerations here. First, complexity is not likely to change based on who is doing the work. The work is equally complex, but some people have existing knowledge and skills that allow them to cut through some of the complexity. Second, if you estimate at the team level, complexity is also considering the team's effort. More complex stuff is more likely to include pair programming, more intensive peer reviews, more testing, and so on.

So, my recommendation is two-fold:

  • Use consensus-based estimation techniques and risk management to determine what the right estimate is. Find something between the mid-point and the highest estimate, based on the level of risk that you are willing to accept in the estimate.
  • Consider moving away from time-based estimates toward complexity-based estimates. Or, as an alternative, extremely small units of work that are all roughly equally sized.
","4","","","","","2019-04-26 13:09:36","","","","0","","","","CC BY-SA 4.0" "100357","2","","100348","2011-08-11 14:10:28","","7","","

Lisp is the language you hear when standing close and listening to the voices coming from the ivory towers. Other languages, such as PHP, might not be as elegant or powerful, but they are like a common tongue, easy and forgiving.

While Lisp has influenced many languages, it never made it to the mainstream. Why? Because many developers didn't understand the concepts of the language; to them it seemed rather obscure. Lisp is hard to understand for the vast masses of developers. Have you ever seen a job description requiring Lisp as a programming language? I haven't. ""Why"", you ask? Because it's hard to maintain and read for many people. In Lisp, far more often than in other languages, you cannot tell immediately what an expression is doing by simply looking at it. It lacks a certain kind of simplicity, and that's why it never became the common tongue.

Nevertheless, Lisp has had an impact on many languages. I do recommend learning it for academic purposes. It widens your mental borders so you can often think about problems from a different point of view. However, I wouldn't recommend using it for web applications, unless you're carrying out a feasibility study for a university. It's lacking support in tools and libraries compared to the other options. If you want to acquire practical skills that will eventually yield some money and can be presented on your resume, then by all means pick Python. You'll benefit from Lisp as well, but it's less practical and more academic in nature, although your overall programming style might benefit from it.

Also, there's a renaissance of functional languages these days. You could also look into F# for .NET or Scala on the JVM if you want to pick up some functional concepts.

So make your choice. If both were real languages, which of these would you rather learn: Latin/Ancient Greek or French/German/Italian/Spanish/Chinese/Arabic?

","20294","","20294","","2011-08-11 14:43:46","2011-08-11 14:43:46","","","","2","","","","CC BY-SA 3.0" "100876","2","","99996","2011-08-13 13:18:34","","2","","

A genetic algorithm requires some way to reward good genes with greater propagation. If you had no way to tell good genes from bad genes, you couldn't use a genetic algorithm at all.

For a genetic algorithm to work, you must allow the more fit solutions to reproduce in preference to the less fit solutions. Otherwise, you'd just be trying random solutions.

Here's a typical example from my own experience: Developing one of the first voice dialing systems, we had a hard time finding an algorithm to match a spoken name to a stored copy of that same name. We were told that 95% accuracy picking one name out of 25 was sufficient. We had a stored corpus of people saying 25 names 10 times each.

First, we developed an input system that measured the length of the spoken word and the frequency energy in several normalized chunks of it. Then we developed an algorithm that assigned weights to the matches on those parameters and compared two sets of parameters through those weights.

Now, we had one last step -- what should the value of those weights be?

We created 1,000 random sets of weights and tested them against the corpus. We threw away the 500 that performed the worst. For the remaining 500, we duplicated each one and in one of them, randomly raised or lowered one of the weights.

We repeated this process on a computer for about two weeks until it finally had a set of weights that met the 95% accuracy criterion. Then we tested it on data not in the corpus. It was about 92% accurate. So we ran longer to get to 98% accuracy on the corpus and that set of weights produced 95% accuracy on data not in the corpus.
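
If it helps to picture that loop, here is a rough sketch in Python of the selection/mutation cycle described above. The fitness function below is only a stand-in, since the real one replayed the recorded corpus of names and measured recognition accuracy, and the constants are made up for illustration:

import random

NUM_WEIGHTS = 20   # stand-in for the number of measured parameters per word
POPULATION = 1000
TARGET = 0.95

def accuracy(weights):
    # Stand-in fitness function; the real one scored each weight set
    # against the corpus of recorded names.
    return 1.0 - sum(abs(w - 0.5) for w in weights) / len(weights)

def mutate(weights):
    # Duplicate the parent and randomly raise or lower one weight.
    child = list(weights)
    i = random.randrange(len(child))
    child[i] += random.choice((-0.05, 0.05))
    return child

population = [[random.random() for _ in range(NUM_WEIGHTS)]
              for _ in range(POPULATION)]

best = 0.0
while best < TARGET:
    # Score everyone, throw away the worst half, breed the survivors.
    ranked = sorted(population, key=accuracy, reverse=True)
    survivors = ranked[:POPULATION // 2]
    best = accuracy(survivors[0])
    population = survivors + [mutate(w) for w in survivors]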

So, the point is, you must have a fitness function to run a genetic algorithm. If you have no way to tell good genes from bad genes, how can you make sure the good genes reproduce and the bad genes don't?

","34200","","","","","2011-08-13 13:18:34","","","","0","","","","CC BY-SA 3.0" "392143","2","","392132","2019-05-20 14:44:59","","6","","

It seems to me that most discussions about exceptions are missing the point.

Most people reiterate ""exceptions should be used only for exceptional circumstances"" and then argue about their differing opinions of what does or does not qualify as ""exceptional circumstances"".

That is a meaningless semantic game. Instead, I propose that the use of exceptions should be based on their nature as language features. Exceptions have three core properties:

  1. They can transfer control flow up the call stack without requiring any changes in the code in between.
  2. They cannot be ignored silently. If there is no code handling them, your application will crash with a nice stack trace telling you what happened. And code that catches and ignores them is at least a very visible red flag.
  3. They syntactically represent alternative results of a method, you can have an arbitrary number of different ones, and catch clauses allow you to handle them selectively.

And therefore exceptions should be used when these properties provide a benefit:

  1. When you want to give the caller freedom to decide the granularity of error handling - i.e. if sometimes it would make sense for a caller to immediately do something specific, but in other circumstances it could be handled by a generic error handler (e.g. one that returns a HTTP 500 response).
  2. When there is a case that callers are likely to ignore and you don't want them to.
  3. When your method has a natural return value, but also special cases where the return value doesn't exist, and they might need to be handled in different places.

Points 2 and 3 especially are a good match for the ""exceptional circumstances"" definition, which is probably where exceptions got their name, but these more specific properties allow for more concrete reasoning about why exceptions should or should not be used in a given case.
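
To make points 1 and 3 concrete, here is a small sketch (in Python, though the idea is language-agnostic; the names are invented for illustration):

class OutOfStockError(Exception):
    pass

def reserve_item(inventory, item_id):
    # The natural return value is the reservation; the case where no
    # reservation is possible is signalled out-of-band via an exception.
    if inventory.get(item_id, 0) == 0:
        raise OutOfStockError(item_id)
    inventory[item_id] -= 1
    return {'item': item_id, 'reserved': True}

def place_order(inventory, item_id):
    # This caller chooses fine-grained handling right here...
    try:
        return reserve_item(inventory, item_id)
    except OutOfStockError:
        return {'item': item_id, 'backordered': True}

def admin_reserve(inventory, item_id):
    # ...while this one lets the exception travel up the call stack to
    # whatever generic handler exists (e.g. the one that produces an
    # HTTP 500 response), without any extra code in between.
    return reserve_item(inventory, item_id)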

And to finally answer the question as stated: no, exceptions are useful in many cases where you do not want the program to crash.

","11982","","11982","","2019-05-20 15:30:46","2019-05-20 15:30:46","","","","2","","","","CC BY-SA 4.0" "101849","2","","101836","2011-08-17 19:22:48","","4","","

I believe that there should be a few set points of contact with the customer. In traditional project management, this person is the project manager. In Scrum, you would call this person the Product Owner. In Extreme Programming, it's the customer representative. Note that in Scrum and XP, nothing says that this person has to be from the customer's organization, but only be given the voice of the customer when it comes time to make decisions.

To me, the biggest concern is the number of communication paths. The number of communication paths on your project team is defined as (N × (N-1)) / 2 where N is the number of people on the project team. If everyone on the team has communication access to the customer or client, then N increases by the number of contacts within the customer's organization, greatly increasing the communication paths (a team of 6, for example, has 15 paths; add 3 customer contacts and those 9 people have 36). It becomes nearly impossible to know who knows what information, who said what when, and keep all communication organized.

My second biggest concern with free communication between your development team and your client is keeping track of who is saying what in terms of the current health and status of the project, meeting estimates on schedule and budget, and so on. Having a primary point of contact ensures that everyone outside the development organization knows exactly what's going on at the appropriate level of granularity, and ensures that each member of the team has everything they need to do the tasks assigned to them.

My manager, and old-school mainframe guy, contends that developers should never ever have ANY client contact. That all interaction with the client should be mitigated by a Project Management layer. He asserts that this allows a coder the focus they need in order to CODE, and protects the business's relationship with the client from the autism-spectrum tendencies of Joe Average Code Monkey.

I agree with your manager, for the most part. Your developers are there to develop and your testers are there to test. Does that mean that one of your developers can't be a customer contact, a Product Owner, or a customer representative? Absolutely not. In fact, it might be helpful to have someone with a technical background be involved with interacting with the customer, especially when it comes to requirements engineering, feasibility discussions, and scheduling (after all, engineers are the best at scheduling engineering tasks).

Asserting that your client needs to be shielded from your developers is wrong, and perhaps even offensive. Your engineers should have the knowledge and skills that it takes to interact with clients (and all stakeholders to a project). It's just that they shouldn't all be called upon to do it.

His boss, the owner of the company, told him this morning that every single person in the company needs to think of themselves as an extension of the sales department, and needs to be listening all the time for upsell opportunities. To this end, he thinks clients ought to have more or less full access to developers directly, in part so that devs have the opportunity to hear sales opportunities.

The owner is absolutely wrong. People without sales or marketing education or training should not be doing the job of those departments. Might the engineers need to interact with sales and marketing? Absolutely yes. But it's not their job to sell software, it's their job to build software that meets the requirements of the stakeholders. If all of your developers are busy selling and marketing the software, who is designing, building, and testing it?

","4","","","","","2011-08-17 19:22:48","","","","1","","","","CC BY-SA 3.0" "101957","2","","101954","2011-08-18 06:27:47","","2","","

There are extrinsic motivators, like the bonus and paycheck you mentioned, which help to some extent in getting things done, but a major chunk depends on intrinsic motivation, and that is where you need to focus a bit more:

  • Give a proper picture of the tasks at hand and allow them to individually execute them (instead of micro managing)
  • a healthy work environment (apart from great machines) whereby they can voice their opinion and have people who can hear them out
  • The technologies you are using are of interest to them and they love working on them
  • appreciate their work
  • give space for their personal life

This could also be an interesting read - Joel Spolsky

","26119","","","","","2011-08-18 06:27:47","","","","2","","","","CC BY-SA 3.0" "101969","2","","101954","2011-08-18 07:30:03","","10","","

Money has been proven not to be a strong motivator, though too little money is a strong demotivator. Pay enough to take money off the table as an issue. Any more won't help, in fact it may hurt.

This video suggests that the most powerful motivator is autonomy and I have found that to be true. However, you can go too far. Developers like their code to be perfect and if you give them room to make it such, there will be a cost in terms of getting stuff done.

Peopleware is about one-third dedicated to the environment that ""thought-workers"" spend their day in, for good reason. Lots of natural light, lots of space, lots of freedom to arrange things the way they want. However, it does focus very strongly on silence and I think you can go too far with that too. As Uncle Bob says in Agile Software Development, Principles, Patterns, and Practices, an Agile team is vibrant and communicative. My theory is that, within sensible limits, a constant noise is fine; it's sudden noises that drag people out of the zone.

Two things that I've found to be very powerful motivators in my own experience are good tools and good teammates.

Anything which slows people down is a demotivator. Roy Osherove of 5whys talks a lot about this and suggests that every team leader should see their only role as ""Bottleneck Ninja"".

And developers love to learn, preferably from each other, preferably all day long. If you can get a team of good solid developers and put them in a room together, they'll do a lot of the motivating themselves.

Finally, respect. There is little more important to geeks. Understand that you are dealing with intelligent people and act accordingly. Don't force them into asinine team-building sessions and company picnics. Just treat them with respect, put the job in front of them and (as much as possible) let them go at it. Ask for visibility, by all means, but do not micromanage.

","12828","","12828","","2011-08-18 07:49:57","2011-08-18 07:49:57","","","","1","","","","CC BY-SA 3.0" "102215","2","","102205","2009-12-06 13:20:21","","339","","

This is an old answer.
See UTF-8 Everywhere for the latest updates.

Opinion: Yes, UTF-16 should be considered harmful. The very reason it exists is that some time ago there was a misguided belief that widechar was going to become what UCS-4 now is.

Despite the ""anglo-centrism"" of UTF-8, it should be considered the only useful encoding for text. One can argue that source codes of programs, web pages and XML files, OS file names and other computer-to-computer text interfaces should never have existed. But when they do, text is not only for human readers.

On the other hand, the UTF-8 overhead is a small price to pay for its significant advantages, such as compatibility with unaware code that just passes strings around as char*. This is a great thing. There are few useful characters which are SHORTER in UTF-16 than they are in UTF-8.

I believe that all other encodings will die eventually. This implies that MS-Windows, Java, ICU and Python will eventually stop using UTF-16 as their favorite. After long research and discussions, the development conventions at my company ban using UTF-16 anywhere except OS API calls, and this despite the importance of performance in our applications and the fact that we use Windows. Conversion functions were developed to convert always-assumed-UTF-8 std::strings to native UTF-16, which Windows itself does not support properly.

To people who say ""use what is needed where it is needed"", I say: there's a huge advantage to using the same encoding everywhere, and I see no sufficient reason to do otherwise. In particular, I think adding wchar_t to C++ was a mistake, and so are the Unicode additions to C++0x. What must be demanded from STL implementations, though, is that every std::string or char* parameter be considered Unicode-compatible.

I am also against the ""use what you want"" approach. I see no reason for such liberty. There's enough confusion on the subject of text, resulting in all this broken software. Having said the above, I am convinced that programmers must finally reach a consensus on UTF-8 as the one proper way. (I come from a non-ASCII-speaking country and grew up on Windows, so I'd be the last person expected to attack UTF-16 on religious grounds.)

I'd like to share more information on how I do text on Windows, and what I recommend to everyone else for compile-time-checked Unicode correctness, ease of use and better portability of the code. The suggestion differs substantially from what is usually recommended as the proper way of using Unicode on Windows. Yet, in-depth research into these recommendations led to the same conclusion. So here goes:

  • Do not use wchar_t or std::wstring anywhere other than at the point adjacent to APIs accepting UTF-16.
  • Don't use _T("""") or L"""" UTF-16 literals (These should IMO be taken out of the standard, as a part of UTF-16 deprecation).
  • Don't use types, functions or their derivatives that are sensitive to the _UNICODE constant, such as LPTSTR or CreateWindow().
  • Keep _UNICODE always defined, so that passing char* strings to WinAPI does not compile silently.
  • std::string and char* anywhere in the program are considered UTF-8 (unless said otherwise).
  • All my strings are std::string, though you can pass char* or string literal to convert(const std::string &).
  • Only use Win32 functions that accept widechars (LPWSTR), never those which accept LPTSTR or LPSTR. Pass parameters this way:

    ::SetWindowTextW(Utils::convert(someStdString or ""string literal"").c_str())
    

    (The policy uses conversion functions below.)

  • With MFC strings:

    CString someoneElse; // something that arrived from MFC. Converted as soon as possible, before passing any further away from the API call:
    
    std::string s = str(boost::format(""Hello %s\n"") % Convert(someoneElse));
    AfxMessageBox(MfcUtils::Convert(s), _T(""Error""), MB_OK);
    
  • Working with files, filenames and fstream on Windows:

    • Never pass std::string or const char* filename arguments to fstream family. MSVC STL does not support UTF-8 arguments, but has a non-standard extension which should be used as follows:
    • Convert std::string arguments to std::wstring with Utils::Convert:

      std::ifstream ifs(Utils::Convert(""hello""),
                        std::ios_base::in |
                        std::ios_base::binary);
      

      We'll have to manually remove the convert, when MSVC's attitude to fstream changes.

    • This code is not multi-platform and may have to be changed manually in the future
    • See fstream unicode research/discussion case 4215 for more info.
    • Never produce text output files with non-UTF8 content
    • Avoid using fopen() for RAII/OOD reasons. If necessary, use _wfopen() and WinAPI conventions above.

// For interface to win32 API functions
std::string convert(const std::wstring& str, unsigned int codePage /*= CP_UTF8*/)
{
    // Ask me for implementation..
    ...
}

std::wstring convert(const std::string& str, unsigned int codePage /*= CP_UTF8*/)
{
    // Ask me for implementation..
    ...
}

// Interface to MFC
std::string convert(const CString &mfcString)
{
#ifdef UNICODE
    return Utils::convert(std::wstring(mfcString.GetString()));
#else
    return mfcString.GetString();   // This branch is deprecated.
#endif
}

CString convert(const std::string &s)
{
#ifdef UNICODE
    return CString(Utils::convert(s).c_str());
#else
    Exceptions::Assert(false, ""Unicode policy violation. See W569""); // This branch is deprecated as it does not support unicode
    return s.c_str();   
#endif
}
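
For reference, a minimal sketch of what such conversion helpers might look like (an illustration only, not the withheld implementation above), built directly on the Win32 WideCharToMultiByte/MultiByteToWideChar calls:

#include <windows.h>
#include <string>

// Illustration only: convert between UTF-16 std::wstring and std::string in the
// given code page (UTF-8 by default). Error handling is reduced to returning an
// empty string.
std::string convert(const std::wstring& str, unsigned int codePage /*= CP_UTF8*/)
{
    if (str.empty()) return std::string();
    int len = ::WideCharToMultiByte(codePage, 0, str.data(), (int)str.size(),
                                    NULL, 0, NULL, NULL);
    if (len <= 0) return std::string();
    std::string result(len, '\0');
    ::WideCharToMultiByte(codePage, 0, str.data(), (int)str.size(),
                          &result[0], len, NULL, NULL);
    return result;
}

std::wstring convert(const std::string& str, unsigned int codePage /*= CP_UTF8*/)
{
    if (str.empty()) return std::wstring();
    int len = ::MultiByteToWideChar(codePage, 0, str.data(), (int)str.size(),
                                    NULL, 0);
    if (len <= 0) return std::wstring();
    std::wstring result(len, L'\0');
    ::MultiByteToWideChar(codePage, 0, str.data(), (int)str.size(),
                          &result[0], len);
    return result;
}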
","35174","Pavel Radzivilovsky","-1","","2014-10-08 17:48:52","2014-05-05 13:28:02","","","","52","","","2011-08-18 21:32:55","CC BY-SA 3.0" "102395","2","","102381","2011-08-19 15:36:07","","9","","

Thinking it out loud looks like a safer bet to me.

<thinking out loud>

so, what are my options here?

  1. Think it out loud and get to the right solution. Nice on all counts. Score: 2

  2. Think it out loud and get to the wrong solution, or get it right with help from the interviewer. Bad, but it at least gives me a chance that the interviewer will appreciate the thought process I exposed while getting there. Score: 0.5

  3. Silently get to the right solution. Pretty good, though there's a risk that the interviewer will doubt my teamwork abilities. Score: 1.5

  4. Silently get to the wrong solution. Total disaster: not only did I fail, but I also gave the interviewer a big fat chance to think I'm dumb. Score: -1


Count: thinking out loud wins over silence 2.5 : 0.5.

</thinking out loud>

","31260","","","","","2011-08-19 15:36:07","","","","0","","","","CC BY-SA 3.0" "103353","2","","103280","2011-08-24 14:23:41","","3","","

OK, as a lead it is your job to get the projects out the door. So you have to be the one who enforces standards and code reviews, asks for progress reports, and does all those things the developers would rather you left them alone about. These things are just requirements of management and, except for the code reviews, don't really grow the employees' skills.

However, you want to help them grow which is a great attribute in a leader.

Code reviews are certainly a first step; they will help you see who has less than stellar skills and needs improvement to reach even satisfactory performance. They will help the developers see other ways to do things and understand different parts of the code base than the ones they personally worked on. In my opinion, code reviews are best done in person in a conference room with the developer, the reviewer (who should be another developer when possible, not always the lead; reviewing others' code is also a skill that needs to be developed) and you as a moderator. You should keep notes on what needed to be changed to identify trends. What you are really looking for isn't mistakes or changes (everyone's code can be improved), but consistent failure to learn from mistakes. Do not tell upper management you are keeping these notes, or you will find yourself forced to use them as measurements in the performance review process, which frankly defeats the purpose. If several developers are making the same mistakes, a training session or a wiki entry on how to do X may be in order.

Now on to growing skills, as opposed to just getting people to the minimal level. First, you need to know what skill sets the developers have, what skill sets it would be useful for them to have, and what they might be interested in gaining knowledge of. You need to talk to them, review their resumes, and understand what they like and don't like to do.

Don't give all the interesting assignments only to the most skilled. That doesn't help the others get up to speed on new problems and technologies. You can't move from being the most junior guy getting only the smallest and least important tasks to the senior guy unless someone takes a chance and assigns more difficult work to you. That said, the less experienced may need to be assigned first to pair program with a senior to get more advanced skills. Including the juniors in code reviews will also expose them to more advanced techniques.

First give them a chance to figure out the issue themselves. But sometimes people are stuck and don't know where to start (a skill that also needs developing, especially in new programmers) or what to do to solve a problem.

If you give them a couple of days to research something and they still don't have a direction for how they are going to do something, then you may need to intervene with some suggestions. If you are technical yourself, you may give them some ideas for how to solve the problem. If not, a meeting with several people where you brainstorm ideas can help if the person is stuck. Or asking a more experienced person to give some suggestions. What you don't want to do is take the problem away from them and solve it yourself. But you have to balance getting the project done with the programmer's ego and at times you need to send them in a specific direction. If he has a bad solution and it needs to be fixed, the worst thing you can do is give it to someone else unless you intend to fire the programmer. Making people fix their own mistakes is how they learn not to make them.

I've seen bad programmers coddled, where someone else has to fix almost everything they do. The other programmers resent this and just want the person out of their lives. Coddling a bad programmer leads to the good programmers leaving. You have to find the line between coddling and developing skills. If you give someone several chances and he or she never gets better, then cut him or her loose.

For the seniors who are already competent in their current skill sets, things are easier. Usually you just need to give them the opportunity to do something new and they jump in and learn it. Just make sure the interesting opportunities get spread around and don't all go to Joe the Wonder Programmer who can fix anything. You want to end up with ten Joes not just one.

Another way to develop skills is to have a weekly 1-hour training session. Make each developer responsible for a particular topic. This will help them get better at communicating, will make them research something in depth, and will give everyone the benefit of their research. Some topics should be assigned to people who are not familiar with the topic, to force them to grow some knowledge in that area, and some should be assigned to people you know are the local experts on that topic. Topics should be a combination of things you need people to be good at in the near future or right now, and some coverage of new upcoming technologies that you don't use right now but people are interested in learning about to see if they could be useful. But everyone, including the most junior, must be assigned a topic. Doing the training is as much a growth opportunity as listening to the training.

Depending on how your developers' time is billed (this is harder in a customer billing situation), it is usually worth it for developers to have 4-8 hours a week to work on personal projects. They will be excited to do this. The best people will want to work there and they will learn a lot that will become useful for the future. It's hard for the bean counters to understand the need for this, but this time will be paid back many times over in employee satisfaction, new features or software that nobody required (or which will help automate some of the drudgery) and faster development due to new techniques learned. Some developers will use this time strictly for personal projects not related to what you do (and that's good, they will still be gaining skills and happy for the opportunity), but many others will use it to solve persistent problems that, due to the nature of how projects are managed, nobody had time to fix beforehand. So you may get refactorings that benefit everyone; some people might write tests to improve test coverage to make it easier to refactor; some others might explore some new features that might make your software more useful to its customers. In general, if you can persuade the bean counters, there is no way to lose by allowing them this freedom.

You have to learn how to balance letting people have some stretch for their skills and keeping the project on track. The less experienced the developer is, the more someone needs to check on progress especially in the early stages when changing direction is easier. The inexperienced may struggle and be afraid to speak up. These people tend to leave just before launch and you find their part of the project isn't anywhere close to being done. Be especially careful to check progress on anyone you have who has changed jobs frequently (unless they were a contractor as that is the nature of contracting).

The more experienced can generally be trusted to tell you when they are having trouble finding the solution and need some assistance from someone with more knowledge in the area or they will go seek out that person and get the knowledge transfer. So they don't need to be monitored as closely in the initial phases of learning a new skill set for a project. They will find a way to deliver the project. Those who have a track record of delivering can usually be left alone except for minimal progress reports (you usually have to report to your management too and thus need some information).

","1093","","","","","2011-08-24 14:23:41","","","","1","","","","CC BY-SA 3.0" "393569","2","","393544","2019-06-19 16:21:25","","5","","

I'm still not sure whether to use exceptions or guard clauses for dealing with invalid arguments.

Guard clauses and exceptions aren't mutually-exclusive. Failing the condition in a guard clause is often grounds for throwing an exception. In a lot of ways, something like guard_against_none(value) is a shorthand for ""throw something if value is None.""
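
For illustration, such a guard helper can be as small as this (a minimal sketch; the name just follows the wording above):

def guard_against_none(value, name=""value""):
    """"""Raise if value is None, otherwise return it unchanged.""""""
    if value is None:
        raise ValueError(name + "" must not be None"")
    return value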

It seems that Python is more lax with exceptions and in fact encourages them to be used for control flow. E.g., the StopIterationError. Does this extend to using exceptions to handle argument validation?

Don't get laxity and a different paradigm confused.

A number of languages people tend to learn early these days impose a significant penalty for using exceptions. That fosters a general ""exceptions considered harmful"" mentality that sticks because there's no context for it. Nobody bothers to tell noobs that it's a limitation of the language, not of exceptions in general.

Python is not one of those languages. Exceptions are cheap and efficient enough that using them for control flow is just fine and can make for easier-to-read code.

To me, getting an invalid argument is not ""exceptional"".

This code would beg to disagree:

def divide(dividend, divisor):
    """"""Do a division.  The divisor shouldn't be zero.""""""
    return dividend / divisor

print(divide(12345, 0))

Even if you don't check divisor, the divide operation in the returned expression is going to raise a ZeroDivisionError for you. That says that a division by zero happened, but it doesn't give the caller any hint whether it was because of a bug in the function or because the divisor passed into it wasn't valid according to its documentation.

It is actually expected to happen about 10% of the time. ... Should it therefore fail more loudly?

Yes. No. Maybe. That's a decision you have to make based on your circumstances.

This specific case turns the invalid into the valid. The guard clauses as you wrote them don't really guard against anything. They codify an undocumented semantic that says the function will return silently without doing anything if given invalid input. This needs to be part of the function documentation so anyone calling the function understands the potential pitfall.

If your program contains a mix of cases, you need different functions that provide different behaviors:

def add_document(data, country):
    """"""Add a document.""""""
    if <invalid-argument-condition(s)>:
        raise ValueError(""Invalid argument(s)."")
    # Add the document

def quietly_add_document(data, country):
    """"""Add a document, ignoring invalid input by doing nothing.""""""
    try:
        add_document(data, country)
    except ValueError:
        pass  # Don't care if arguments were bogus.

Does Python Zen ""Errors should never pass silently."" apply here?

I'm not one for ""never x""-type dicta; in this case I'd say it does apply in an ""errors should never pass silently except where they should"" sort of way.

Or should exceptions still be reserved for cases where either they have to be handled, or the software must be made to terminate?

Handling and termination are the only possible outcomes of an exception. Termination is there for cases where you know something is wrong and none of the callers all the way up to the main program know how to deal with it.

","20756","","","","","2019-06-19 16:21:25","","","","0","","","","CC BY-SA 4.0" "103506","2","","103501","2011-08-25 04:00:44","","19","","

That was one of my ideas in the past: having a high performance server which has all the required software, and a bunch of low performance desktop PCs which would be used only to connect to the server through Remote Desktop.

The benefits would be:

  • The solid backup. Some developers may not want to backup their desktop computers regularly, so a central solution would be more reliable,
  • The possibility, for every developer, to work from anywhere. By this I also mean working from any PC in the company. Let's say in the morning, the developer wants silent work conditions. He goes to his own room and works there. Then he wants to do some pair programming or to work in a more social environment. He just shuts down his desktop PC, goes to another room where there are ten computers, and connects from there. No ""I must reload all my apps again"".

Well, there are several serious problems with that, making me think that I will never use the thing like this the next years.

  • Specificity of remote solutions. What about working remotely using several screens at once? I mean, is it easy? Is it obvious? Are the shortcuts I use daily still available when working remotely? I'm not so sure. What if I press Ctrl+Shift+Esc to see the list of programs currently running? Oh yes, it doesn't work, so now I must remember to do it in a different way.

  • Performance hit. I'm not sure there will be no performance decrease at all. And remember, a programmer who uses a slow computer is an unhappy programmer. And the company who makes their programmers unhappy with crappy conditions will never produce high quality software.

  • Higher impact of a disaster. Will you host the solution on a redundant server? Do you have a redundant network in your company? Let's say the router goes down, and is not redundant. It means that all the developers are now unable to work. At all. Because they don't have software installed locally. Because they don't even have source code: it's on the server. So everyone stops, and you're paying all those people per hour just to wait for the router to be replaced.

  • Hardware costs. If it's one and only one server, how much will it cost? If you have, let's say, twenty developers, would 64 GB of RAM on the server be enough? Not so sure. Would a quad-core solution with two CPUs be enough? Again, I have some doubts. Otherwise, what do you have in mind? Some sort of cloud? Or do you have a scalable solution which works on several servers? Are you ready to pay the cost of Windows Server (if you use Windows) per machine?

  • Electricity cost. If you work completely remotely, it means that you spend nearly the same amount of power server-side as if you were working locally, plus the amount of power wasted by the local machine and the network.

  • Licenses. I'm not sure if I must put it as a benefit or a problem, but I feel like the cost of software licensing in this case will be much higher.

And again, think about all the costs of management, support, deployment, maintenance. With a custom solution like this, it may easily become huge, not counting that every time something will fail, you'll see every developer NOPing around, waiting to be able to continue his work.

","6605","","","","","2011-08-25 04:00:44","","","","9","","","2012-10-28 20:12:45","CC BY-SA 3.0" "291136","2","","291130","2015-07-29 06:53:10","","5","","

It's great that you're taking initiative to introduce new things to a startup - but if your founders have been through the enterprise experience sometime in their lives, then they'd be wary of any 'processes' that have a tendency to introduce bureaucracy in the company. A simple code review tool can gradually evolve into a mandatory pre-commit review process that is enforced by git hooks!

You need to be shrewd enough to not imply any such thing and steer away from any discussions that head in that way - because, well, everybody hates bureaucracy!

Here's the technique that I had once used:

  1. How do people look at each other's code right now?

    In my case, people used to email each other patches (so we kind of did have an informal review process already) - and then you could see people either walking over to each other's desk or writing poetic descriptions to locate the part of code they want to comment on

    In class SomeWeirdClassName, function fooButNotJustFoo() should return a SomeStructInADifferentHeader instead of an int!

    You can now point at such instances and say, ""Hey, this is broken! We can do it in a better way!"" and then go on to talk about how a code review tool allows you to add inline comments directly on a particular line of the patch.

  2. Start with a small group, maybe your own team (you can coerce, ahem, convince them over lunch) and ask them to evangelise it with you - talk about how awesome things are in code review land during an all-hands meeting.

  3. If you have an admin guy, then get him drunk on a Friday night and silently add a couple of e-mail aliases to the CC list of all reviews. On Monday, quite a few people will get the code review mails, with contextual comments, live links to the patch and what-not; by the time someone realizes what's going on and removes those aliases from the CC list, your word is already out! Now everyone is talking about ""those weird mails that ended up in their inbox by mistake"" - the perfect time to put on your evangelist hat!

  4. If you prefer to talk directly to your boss, then make sure you highlight the fringe benefits of using the code review tool -

    a) The emails ensure that everyone knows what every other developer is working on

    b) If some developer decides to call in sick on release day, then you don't have to hack into this computer to get what he was working on - you can just download the patch from the code review tool and check it in yourself

    c) Frequently putting others' code into everyone's faces embodies a sense of the prevalent coding culture and prompts everyone to get on the same boat, as opposed to religiously following their own coding style

Lastly, since you've already introduced git (successfully) and people are happy to use it, you already have some street cred riding on you - bank on it to push this new amazing thing that's going to change everyone's life (for the better)!

","108426","","8669","","2015-07-29 10:39:08","2015-07-29 10:39:08","","","","1","","","","CC BY-SA 3.0" "394609","2","","394551","2019-07-12 15:46:30","","4","","

Capturing requirements is an essential part of any (successful) software project. But writing a requirements specification isn't.

  • A documentation-centric approach can end up like a game of Chinese Whispers: a subject matter expert voices a requirement, an analyst writes it down, a dev tries to write something that meets the analyst's description, and the end user is confused because the software doesn't solve their problem.

    Agile techniques suggest that developers should gather requirements directly from the subject matter experts, usually the end users. There are a variety of techniques to do this, for example talking through an example scenario with the SME. Devs are good at spotting edge cases and asking the SME to clarify how the software should behave in each edge case.

  • Instead of gathering all requirements up front (and thus risking large misunderstandings), agile teams will likely start with a small slice of requirements, build a prototype, and use that to gather feedback for the next iteration.

  • As the understanding of the requirements shifts over time, a static requirements specification will fall out of date. How can this be prevented?

    By expressing requirements as runnable tests. It turns out that “readable specification” and “runnable tests” are not exclusive concepts, but can end up being one and the same document. E.g. Cucumber and other ideas out of the BDD space can be very helpful here.

In the case where you are rewriting an old system, the original system can be a great source of requirements. But which aspects are relevant? Are its niche features even being used? Which bugs must be reimplemented for compatibility? There's usually no way around talking with the end users.

Having a working system lying around can also be very helpful for testing the new software, but that is unrelated to any agile-ish concerns.

Note that fixed-scope, fixed-time projects with looming deadlines are the antithesis of agile. The normal agile approach is to pick a sliver of functionality and first deliver that, then iterate. The most important stuff gets done first, less important stuff later (or never). If everything is important and MUST be done ASAP, then nothing is important and the project is unlikely to deliver anything.

In your situation, the lack of requirements is not an agile feature but more an average case of organizational dysfunction. If you want the project to succeed, you will need to find a way to cut through this dysfunction. E.g. urge the business owner not to write a complete requirements document, but try to set up a meeting where they explain their requirements for the most important feature. You can look at the old system for details. Then implement that feature, and iterate.

","60357","","","","","2019-07-12 15:46:30","","","","0","","","","CC BY-SA 4.0" "188418","1","188419","","2013-02-26 07:32:56","","5","431","

As a Lead Programmer my responsibilities include calculating the deadlines of the projects.

To do this I have discussions with the relevant team members and calculate a deadline estimate. Sometimes I get a scared voice from the CTO saying that the estimated deadline is too long. Then I have to shorten the deadline. With a shorter deadline, programmers have to work under extra pressure.

How can I demonstrate to the CTO, with concrete points, that the calculated deadline is reasonable?

","63715","","56691","","2013-02-26 08:50:37","2013-02-26 14:06:27","How to represent calculated deadline is reasonable?","","4","1","1","","","CC BY-SA 3.0" "298159","2","","298145","2015-09-24 14:39:51","","3","","

There are differing opinions voiced, but I doubt people fundamentally disagree.

The point is that wrapping APIs can be best practice.

  1. If the interface of the API is trivial, then there's no point in wrapping. Same if the interface is stable, doesn't get in the way of testing, and isn't much more complicated than what you need.
  2. If the interface is complicated and you only use a subset, use a wrapper. For example, an advanced logging framework where you always want to log things in the same format and all you ever use in the app is LogError, LogWarn, LogInfo, and LogDebug. Using a wrapper allows using a mock for testing log messages, and also makes it easy to change the implementation (a sketch of such a wrapper follows this list).
  3. If the interface is complicated because it has to be, and you use all of it, you won't be able to improve the interface and the wrapper will be as complicated as the original interface. That means a wrapper won't offer a benefit when moving to a different library.
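
As a rough sketch of case 2 (the names here are made up for illustration, not taken from any particular logging library), the wrapper exposes exactly the four calls the application needs:

// Narrow wrapper: the application only ever sees these four methods, no matter
// how rich the underlying logging framework is.
public interface AppLogger {
    void logError(String message, Throwable cause);
    void logWarn(String message);
    void logInfo(String message);
    void logDebug(String message);
}

// Stand-in implementation for illustration; a real one would delegate to the
// third-party framework, and tests can substitute a mock that records calls.
class ConsoleLogger implements AppLogger {
    public void logError(String message, Throwable cause) {
        System.err.println(""ERROR: "" + message);
        if (cause != null) { cause.printStackTrace(); }
    }
    public void logWarn(String message)  { System.out.println(""WARN:  "" + message); }
    public void logInfo(String message)  { System.out.println(""INFO:  "" + message); }
    public void logDebug(String message) { System.out.println(""DEBUG: "" + message); }
}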
","169135","","","","","2015-09-24 14:39:51","","","","1","","","","CC BY-SA 3.0" "189066","1","189069","","2013-03-03 12:27:18","","10","985","

Our teams is having the following discussion:

Let's say we have the following two methods:

public Response Withdraw(int clubId, int terminalId,int cardId, string invoice, decimal amount);

public Response Withdraw(Club club, Terminal terminal,Card card, string invoice, decimal amount);

What's sent over the wire is just the ids.

One side says that the first method is correct, because we only have the ids of the terminal and club, and it should be clear that we have nothing else; this is my approach.

The other side says that the second method is correct because it's more flexible.

We are familiar with the parameter object idea; the other side also thinks that the parameter object should have the objects as properties.

Which is the correct approach?

Maybe there is a third even better approach?

","39810","","","","","2018-09-11 22:02:26","Should a method's parameter list contain objects or object identifiers?","","3","2","7","","","CC BY-SA 3.0" "189068","2","","189066","2013-03-03 12:42:12","","13","","

The first approach is indicative of Primitive Obsession. Because you are passing ints and strings around, it's very easy for the programmer to make a mistake (e.g. passing a clubId to the terminalId parameter). This will result in difficult-to-find bugs.

In the second example, it's impossible to pass a club when a terminal is expected - this would give you a compile time error.

Even so, I would still look at string invoice. Is an invoice really a string? What does amount mean? This is more likely a monetary value.

You mentioned in your question ""whats sent over-the-wire are just the ids."". This is correct, but don't let this requirement muddy your domain.

The best explanation I've seen in favour of this approach was in rule 3 of Object Calisthenics:

An int on its own is just a scalar, so it has no meaning. When a method takes an int as a parameter, the method name needs to do all of the work of expressing the intent. If the same method takes an Hour as a parameter, it’s much easier to see what’s going on. Small objects like this can make programs more maintainable, since it isn’t possible to pass a Year to a method that takes an Hour parameter. With a primitive variable the compiler can’t help you write semantically correct programs. With an object, even a small one, you are giving both the compiler and the programmer additional info about what the value is and why it is being used.
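
As a rough sketch of what such a small type can look like (the class names simply mirror the question's parameters; InvoiceNumber and Money in the comment are equally illustrative):

// A tiny value type; the same pattern applies to TerminalId, CardId and so on.
// Swapping the club and terminal arguments now fails to compile, because the
// parameter types no longer match.
public final class ClubId {
    private final int value;

    public ClubId(int value) { this.value = value; }

    public int value() { return value; }
}

// The signature stays id-based over the wire but becomes self-describing:
// public Response Withdraw(ClubId club, TerminalId terminal, CardId card,
//                          InvoiceNumber invoice, Money amount)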

","34614","","34614","","2013-03-03 13:18:01","2013-03-03 13:18:01","","","","6","","","","CC BY-SA 3.0" "399433","2","","399424","2019-10-07 19:58:56","","12","","

Considering that this is meant for a chemical lab and that your application does not control the instruments directly but rather through other services:

Force termination after showing the message. After an unhandled exception your application is in an unknown state. It could send erroneous commands. It can even invoke nasal demons. An erroneous command could potentially waste expensive reagents or bring danger to equipment or people.

But you can do something else: gracefully recover after restarting. I assume that your application doesn't bring down those background services with itself when it crashes. In that case you can easily recover the state from them. Or, if you have more state, consider saving it. In a storage which has provisions for data atomicity and integrity (SQLite maybe?).

Edit:

As stated in the comments, the process you control may require changes fast enough that the user won't have time to react. In that case you should consider silently restarting the app in addition to graceful state recovery.

","208524","","208524","","2019-10-10 08:03:23","2019-10-10 08:03:23","","","","3","","","","CC BY-SA 4.0" "298492","1","","","2015-09-28 22:01:07","","1","87","

This is the second project I'm working on which will use a design that I'm not 100% confident about. I'd like some feedback, maybe recommendations for a better design, or verification that this would be acceptable by more proficient programmers:

I have a device that I need to interface with, be it via TCP, UART, UDP, or whatever. I need to send this device data/control, and receive data/control information from this device, but it's not exactly a request/response sort of situation. The device can send data at random times without user stimulus. For instance, the newest device is a full-duplex radio. When the radio correctly demodulates and decodes digital packets, it will send the voice data via UDP to my library. It doesn't wait for user input. When I want to send stuff out over the air, I send control information, and then data for the radio to modulate and push over the air.

Packets that go between my library and the device are framed: ie, each packet is prepended a header (and other metadata to identify the type of packet) and appended a CRC and footer so I know a) the start of a packet in case garbage is somehow introduced over the stream, and b) the data is correct (by verifying the footer, and the correct CRC).

Okay, so I'm designing a library that will need to be handed to a different developer for application development. So all the internals are abstracted away from him. For ease in this example, I want this library to have 4 calls: send_data, receive_data, send_voice, receive_voice. The different packet types (voice and data) are sent over one port. The library can frame and parse the appropriate packets (for sending and receiving, respectively), and so can the device.

For sending data to the device:
This is easy.
The application developer calls either send_data or send_voice. I take relevant voice or data, frame it accordingly, and send it to the device. The device parses the data, and sends it on its way. The calls are thread safe, so the developer can't call a UDP send asynchronously.

For receiving data from the device:
This is where I'm not 100% confident in the design.
I start a receive (RX) thread in my library. When I get a packet (either voice or data), I de-frame the packet, and put either the voice or data on a respective queue (a voice queue and a data queue).
When the application developer wants voice, he calls receive_voice and it will return a pop from the voice queue. When the application developer wants data, he calls receive_data and it will return a pop from the data queue. Obviously I'm careful that the queues can't grow to infinity: when a queue is at MAX_QUEUE_LEN, I pop a voice/data packet off the queue and discard it before putting a new one on, and set an 'overrun' flag. This flag can be set/read with set_X_overrun, or get_X_overrun, where X is voice or data depending on the queue.

I actually like the design. However, what do other people think? I'm not sure what people think about putting a running thread inside of a library? I guess the other option is just to have read_data and read_voice calls that will block until a packet is found that matches what they want (voice or data). This eliminates the need for a running thread, but then the opposite packet is essentially disregarded while we wait for the one of choice. Or is there another design I'm not even thinking about?

Sorry about the long-winded and probably unclear design question. Now that I feel like more of a confident programmer using the semantics/syntax of languages themselves, I've been getting much more interested in the design of programs rather than the nitty-gritty of the lower level calls. I'm striving for the best/coolest design possible rather than something that 'just works.'

Bonus points if there's some sort of open source software out there that I can look into and improve my design.

Image of design:


Pseudo code:

public void send_data(buf) {
    frame_data_packet(buf);
    udp_send(buf);
}

public void send_voice(buf) {
    frame_voice_packet(buf);
    udp_send_(buf);
}

public void receive_data(buf) {
    buf = data_queue.pop();
}

public void receive_voice(buf) {
    buf = voice_queue.pop();
}

/* RX Thread */
public void run_rx()
{
    while (run_rx_f) {
        data = udp_receive();   /* hypothetical blocking read from the device */
        type = deframe_data(data);
        switch (type) {
            case VOICE:
                voice_queue.add(data);
                break;
            case DATA:
                data_queue.add(data);
                break;
        }
    }
}
","167603","","167603","","2015-09-28 22:08:53","2015-10-02 01:20:41","Threads Inside Full Duplex Device Library","","1","0","","","","CC BY-SA 3.0" "191309","1","191311","","2013-03-20 16:30:27","","15","1447","

I've just taken on a new job at a college as (the sole) Web application developer.

The college has a number of disparate but all pretty badly coded legacy systems. Mostly built in PHP they deal with things like attendance, exam results, marking etc.

My first job is to build a system that incorporates a lot of this data, which currently rests in various databases without any kind of friendly API to pull it out (the existing systems are coded in vanilla PHP with no separation of data and view), combines it with a new platform for recording pastoral information about students, and presents it all to tutors and senior staff in a useful manner so they can react to issues with students quickly.

In our first meeting, there were 18 people! There was no clear leader or voice that represented the majority. No identifiable client. The meeting swung from detailed implementation ideas on minor features from heads of faculty to arguments about whether we should use Excel spreadsheets or not for data input!

As you can imagine my head was spinning at the end. I actually had a lot of good ideas but I couldn't get them heard. This is a very new role for me, before I was part of a development team in a marketing agency. We had very well defined roles: Project Manager, Client, Designer, Developer.

I'd like to know if any seasoned developers or managers out there can give me some pointers on how I can whip my colleagues up into something that resembles a project team. Is agile the way to go? How would you approach handling all the disparate voices? It's clear that some process needs to be put in place very quickly; I'm just not sure what that is.

","59642","","59642","","2013-03-20 17:40:49","2013-03-27 14:22:24","How to start a development project when there are too many potential stakeholders","","5","7","1","","","CC BY-SA 3.0" "401243","1","401245","","2019-11-18 14:17:25","","1","623","

In an order management system for example, what's the best way to plot the states of different objects that overlap and interconnect at different states?

Example:

  1. Order object will have states like Draft, Confirmed, Placed, Picked, Delivered...etc.
  2. Invoice object will have states like Opened, Paid, Closed...etc.

An invoice will be opened immediately when an order is ""Confirmed"", but the order cannot be ""Placed"" until the invoice is in state ""Paid"". From there, the invoice will be closed by the finance team upon checking and validating the amount; during that time, the order will proceed through its normal states without the invoice being affected.

What's the proper way to plot this? One order might have multiple invoices, so putting the invoice states into the order lifecycle won't work; they have to be separate objects with separate states.

","39441","","","","","2019-11-18 14:59:59","How to plot a state machine diagram of multiple related objects in one diagram?","","1","0","","","","CC BY-SA 4.0" "192020","2","","111633","2013-03-26 17:20:25","","4","","

Subversion and Git both encourage particular (and very different) approaches to development and collaboration. Some organizations will get more out of Git, and others will get more out of Subversion, depending on their organization and culture.

Git and Mercurial are both excellent for distributed and loosely organized teams of highly competent professional programmers. Both of these popular DVCS tools encourage small repositories, with reuse between developers taking place via published libraries with (relatively) stable interfaces.

Subversion, on the other hand, encourages a more centralized and tightly coupled management structure, with more communication between developers and a greater degree of organizational control over day-to-day development activities. Within these more compactly organized teams, reuse between developers tends to take place via unpublished libraries with (relatively) unstable interfaces. TortoiseSVN also permits Subversion to support multidisciplinary teams with members who are not professional programmers (e.g. systems engineers, algorithms engineers or other subject-area specialists).

If your team is distributed, with members either working from home or from many different international sites, or if they prefer to work alone and in silence, with little face-to-face communication, then a DVCS like Git or Mercurial will be a good cultural fit.

If, on the other hand, your team is located on a single site, with an active ""team"" approach to development, and lots of face-to-face communication, with a ""buzz"" in the air, then SVN may be a better cultural fit, particularly if you have lots of cross-disciplinary teams.

Of course, it is possible to configure Git and Hg (powerful and flexible as they are) to do pretty much whatever you want, but it is definitely more work, and they are definitely harder to use, particularly for those members of the team who would not naturally be inclined to use any form of version control whatsoever.

Finally, I also find that sharing functionality using ""hot"" library development under Svn (with CI & a test-driven approach) permits a pace of coordinated development that is difficult to achieve with a more distributed and loosely-coupled team.

","20548","","","","","2013-03-26 17:20:25","","","","0","","","2013-03-26 17:20:25","CC BY-SA 3.0" "301264","1","","","2015-10-29 23:44:57","","2","149","

I have a communication class that incapsulates all the apis of a remote application. This class happens to be a delegate class because it is interchangeable with another one (the remote application has two different channels).

This class has a number of topics (eg: acknowledge, people, invoices, duedates, payment methods, queues), and each topic has a number of methods (setOne, getOne, getAll, doSomething...). The number of topics is raising rapidly recently, from 1-2 to 6 and possibly more in the future.

Considering that the class itself needs some scaffolding methods (configuration, startup, authentication...) and each topic needs some auxiliary protected method, the total projected number of method is rasing beyond my confort zone.

If this class wasn't itself a delegate class, I would have implemented this solution: a class for each topic, each making use of the common class, and if necessary of the others. But the client code wants to be given just ONE class to which make requests.

One option is to make a single delegate class with lots of relay methods to composite objects. But I don't feel this is very far from swallowing the frog and just have a big class full of stuff.

Are there other solutions?

I'm working in php and I colud leverage the ___CALL meta method: class accepts any message and then looks up (via reflection) if it has an auxiliary class to relay the calling. This could be smart, but I would leave this solution for emergencies.

","202164","","31260","","2015-10-30 04:38:06","2015-10-30 14:59:16","how to split up a delegate class","","1","4","1","","","CC BY-SA 3.0" "402048","2","","401780","2019-12-04 13:11:52","","1","","

However, what bothers me is the amount of work required to make a small change in a model,

In other words, it becomes unmaintainable. Don't get me wrong, I'm not saying you are doing it wrong; in fact I think you are doing it correctly, and in the process you uncovered a real problem, and that is that this architecture is just a bad idea to begin with.

It actually gets much worse than what you are experiencing now when there are overlapping use-cases, flags that change some attribute's meaning, or interrelated data attributes that are changed, updated or introduced. It only works on small projects where everything fits in your/the team's head.

There are alternative (i.e. better, more maintainable) designs. Read up on object-orientation more, and try not to make everything ""pure"" just for the sake of it; be a lot more pragmatic. It seems to me that little voice in your head is saying the right things :)

","232369","","","","","2019-12-04 13:11:52","","","","1","","","","CC BY-SA 4.0" "193804","2","","193802","2013-04-03 18:10:15","","4","","

Not that this will solve the whole situation, but you might try adding more comments to your source code.

  1. If code is not complete, it could be marked as such.
  2. If the purpose of a block of code is not self-documenting, then you should document it.

All in all, try to make lemonade instead of wasting time sucking on lemons. As Michael said, in general, teammates aren't out to make you look bad. Try to learn from your mistakes and apply the lessons to future revisions.

If you believe that his changes are having a negative impact, please voice this (diplomatically). If it were me, I would simply ask why specific changes were made and see if I could defend my original changes. Your senior co-workers are human too. It's quite possible that he missed something and/or is unaware of any negative impact he is causing.

","32448","user606723","","","","2013-04-03 18:10:15","","","","7","","","","CC BY-SA 3.0" "194187","2","","194094","2013-04-07 11:59:47","","7","","

I'm sensitive to the notion that you feel powerless to change the environment, but I think this situation is a serious challenge to your professionalism as a programmer.

What would happen if a surgeon stored dirty scalpels with the clean ones? What would happen if a plumber didn't test the pipes he installed? What would happen if a violinist always played out of tune with the orchestra?

Why should programmers be exempt from teamwork and common courtesy? As a member of that team, you share responsibility for the result. It's irresponsible to ignore the team as it sabotages its own efforts.

I'm curious where the rule that ""no one is allowed to check in while the build is red"" comes from. It's a good rule if it then focuses everyone on fixing the build. I would try to lead by example and help fix the build whenever it's broken. Let's face it, you're not getting anything else done anyway. If this annoys you, then I would suggest voicing those concerns. The voice of someone actively trying to improve a situation has more influence than the guy who grumbles in the corner.

","3764","","","","","2013-04-07 11:59:47","","","","0","","","","CC BY-SA 3.0" "402921","2","","402918","2019-12-25 05:15:44","","3","","

To begin with, you might want to think of security and lock/restraint as two different scenarios; the challenges involved in solving them are on different levels too, although the underlying principle remains the same.

Solving for restraint, or accidentally issued commands, is much easier. It can be as simple as asking for a double confirmation like: Are you sure you want to book a cab now? - which is easy even for a child to get past. Or setting security questions, the answers to which a lot of people might know from a simple browse of your social media account.

Solving for security, though, is a different challenge, but again has levels of complexity. However, you might notice that it all comes down to the complexity of the algorithm used in challenge-response authentication. The level of sophistication can be increased by also using voice recognition algorithms (for Alexa or Home), on which a lot of study has been done and is being done.

","350103","","","","","2019-12-25 05:15:44","","","","2","","","","CC BY-SA 4.0" "402962","2","","402939","2019-12-26 11:35:29","","1","","

To me, this is important because you can release changes in small batches and manage integration regressions much more easily.

With a monolith, you make a change to some feature area, build and deploy the entire thing and hope it goes well. With independent services, it's much easier to roll out small chunks and make sure they continue to play well.

Let's say you have Service A which handles something, and v1 is currently in production. All consumers use the v1 API. You introduce some updated functionality under a v2 API. You maintain backward compatibility on the v1 API and can 'silently' release the v2 API, and none of your consuming services need know or care. Service A now supports both the v1 and v2 APIs, and your other consumers can then start migrating to the v2 calls.

With complex monoliths, this is a very difficult and dangerous process, especially when you're running multiple different applications on a shared codebase in an enterprise setting. While microservices require considerably more discipline in many areas, they allow you to make your 'big system changes' in small pieces and without all of the external factors that make complex monolith updates a real bear.

We can somewhat liken it to refactoring code... when we refactor code, we tend to do so in small pieces at a time, check to make sure they work, then package it all up. I tend to find a practical use of microservices is similar, but on the component level instead: we can make small updates to various components without breaking anything, and when everything is surely working well, the old APIs and their implementations can be retired.

Aside from the oft-touted advantages of requiring no outside knowledge, the service team completely owning the service, no cross-service dependencies, etc., this ability to do complex releases in small stages due to the independence is a significant practical advantage of working with microservice architectures.

","204829","","","","","2019-12-26 11:35:29","","","","3","","","","CC BY-SA 4.0" "403130","2","","403128","2019-12-31 07:50:36","","4","","

This is not an authorization problem; it is an accounting problem.

The subscription is for your company's product. These transactions are sales. This is your core business.

I would do a deep dive with the product owners to uncover potential use cases for:

  • Customer onboarding
  • Rate limiting (your question)
  • Invoicing and auditing
  • Customer support
  • Refunds or credits
  • Fraud detection

Also I would chat specifically with the marketing team on what sort of changes (bulk pricing, regionally-targeted pricing, tiers of service) they envision as they adapt to competitive markets. You don't want to be the stick in the mud who has to say, ""we can't support that"" down the road.

Once you have the use cases (they can just be drafts) and some notion of the potential evolution of the product, you will be able to evaluate architectures in support of it. In many cases, the needs of the business require that you actually own this code so that you have complete control over it. This is your business's core competency, after all. In other cases it makes more sense to conform to a standard accounting model or one that is used in the industry and compatible with other third parties' object models (e.g. for MIS applications). There is no one right answer.

I would not tie this problem to an authorization model as that will tie your hands and affect your company's market agility. It is a business problem.

","115084","","","","","2019-12-31 07:50:36","","","","0","","","","CC BY-SA 4.0" "403420","2","","403318","2020-01-06 17:07:35","","3","","

There are actually two different types of exceptions that need to be handled differently.

First of all there are the ""Non-bonehead"" exceptions like ""File not found"". These exceptions are kind of expected; even if you check for the file's existence beforehand, that doesn't prove it will be there when you go to use it.

These you typically catch as low/specific as possible; everyone understands this already because it's how everyone says to handle ""Exceptions"", so let's ignore it.

The other kind is unexpected exceptions. This covers both programming errors and some situational errors you aren't expecting to encounter (like the OS ripping a disk out from under you). These are what you are calling ""Bonehead"" exceptions, but they DO happen.

The important thing is that these are NOT ignored. If the only way to get a team to pay attention is to crash the app hard, then DO SO, but if you can get their attention another way, I recommend catching the most general ""Exception"" type just inside each primary thread loop and dealing with it in such a way as to get the team's attention, and then trying to continue.

Not catching it is really not good compared to just handling it in a way that gets attention.

Also VERY important:

In Java (at least), when a thread throws an exception it is silently eaten by default; it doesn't crash your app or give you any indication that part of your program failed, but if that was a long-running thread, everything it powered is completely (invisibly) gone!

This can be fixed by installing a default exception handler, but be careful, because without the handler or a try/catch, allowing such an exception to just silently kill a thread is the worst possible solution--it's possibly the most expensive thing you can do to another developer (or yourself!). I've spent weeks tracking down exceptions that were eaten by threads and empty catches!
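
A minimal sketch of installing such a handler (the reporting action here is just a placeholder):

// Install once at application startup: any exception that escapes a thread's
// run() method is reported instead of vanishing with the thread.
Thread.setDefaultUncaughtExceptionHandler((t, e) -> {
    System.err.println(""Uncaught exception in thread "" + t.getName());
    e.printStackTrace();
    // ...alert the team, restart the worker, etc.
});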

","1788","","1788","","2020-01-07 17:16:08","2020-01-07 17:16:08","","","","0","","","","CC BY-SA 4.0" "195062","1","","","2013-04-16 05:18:43","","5","432","

I'm looking for a reference to clean coding styles that I can pass to a team member.

In particular, the rule that a method should not change its return type based on an input parameter. If you need different output, use a different method.

Example:

$invoice_items = getInvoiceItems();
$total         = getInvoiceItems( TRUE );

To me, this is bad coding style (and I'm not even talking about a parameter whose meaning can't be determined from the calling code).

The above example should actually be:

$total = totalInvoiceItems();

... where totalInvoiceItems() might call getInvoiceItems() to get the items it needs to total.

Where would I find a reference to this (and possibly other important coding style rules)?

","1737","","7422","","2013-04-16 05:59:15","2013-04-16 05:59:15","Is it OK to have a method return different types based on a parameter?","","2","1","","","","CC BY-SA 3.0" "195505","2","","64722","2013-04-19 18:43:10","","5","","

If the beginner has a good ""preface statement"" before opening a tool, they can often eliminate overwhelm by nipping it in the bud.

I was hired as an instructor for a company teaching programming to complete noobs. (I use ""noobs"" as a term of endearment, aren't we all, in more areas than not?) When I say complete noobs, I'm serious. One first question I got was: ""when you say Right Click, what do you mean?""

One of the first things I learned was to keep the tone of my voice exactly the same, regardless of how often that question had been asked and regardless of how noobish it was. I learned to answer each question as if it were equally valid, equally exciting, and equally valuable -- and as if it were the first time that question had ever been asked.

The Overwhelm-Eliminating preface statement I use, for any tool, goes like this: ""There is only one icon you will click on the opening screen -- at least for now. This tool you are about to open wasn't built just for you, nor just for this task we are about to accomplish. The developers of this tool had to build it for a lot of people, who wanted to do a lot of things, and they had to include capability you will never use. That clutters things up. So you'll see a plethora of buttons and menu items, and anything you click may give you a new plethora of options. But every time you open this tool, in three clicks you can be doing what you want. I'm going to show you that easy secret path through the jungle of useless options.""

Unfortunately, I didn't always get to say any kind of preface before one of my students opened something that looked complicated. I could tell because statements would start to come out of that student's mouth, like: ""I just wasn't cut out for SQL"" or ""I just don't have what it takes to be a Java programmer,"" or one of my favorite lies: ""Crystal Reports is just too hard for me to learn."" What a crock! Lies, decided (and believed) in a moment of frustration, because they thought the tool was a huge forest, when really all they needed was to get to one of the treehouses, in one of the trees.

Here are a few of my preface statements I have used over the years:

  1. There are only 9 basic concepts you need to know, to be a programmer. They are each easy to learn. Every programming language allows you to implement these 9 concepts using a slightly different syntax. As you learn these concepts, and play with some different languages, you don't just become a C++ programmer, you become Programmer, who can program in any language, including (but not limited to) C++.

  2. Every task you want to do, can be broken down into single steps. Even those long complex lines of code, or complicated-looking command lines, are just made up of very easy to understand single steps. If you have a complicated task, just prove one step at a time, verifying each result, then start putting them together, one combination at a time, verifying each combination, until it is complete.

  3. Excellent programmers aren't really excellent programmers. They are excellent debuggers. If you don't get an error when you first compile, you should be worried. If you do, you're on the right path; you can start doing what you love.

  4. Find your ""hierarchy of resources"" and use it. Mine looks like this: If I need to do something, I try it. If it doesn't work, I experiment for a few minutes. If that doesn't work I use the Help file. If that doesn't work I use a web search. If that doesn't work I turn to the closest programmer next to me and bounce it off them. If that doesn't work I call one of my friends, who is (or knows) an expert. If that doesn't work I hire somebody on one of those freelance sites to teach me how to do it. I've only gotten to that last step twice in my life. [Learning this skill kept the students from assigning me as their primary source for solutions.]

  5. Learning to learn is more important than learning any subject. Learning to love learning is more important than learning any subject. If it takes you a while to find your answers on the web, find a good tutorial on the web that teaches you how to do excellent web searches.

  6. Because Visual Studio Express is free, and you can drop a button on a window and make it say ""Hello World"" in just a few moments, many Windows programmers start there. When you are learning, if you start with VB, at least also write the same program in C# at the same time. Preferably also Java and a couple of other languages as well. And it would be smart to also write it in at least one language that is command-line based, like TCC (Tiny C Compiler). I highly recommend using multiple languages, for each learning project, until you realize it's not the language you are learning, it's the programming. Otherwise you can end up limiting yourself and getting stuck in a mindset that isn't true.

  7. When asked if you can do something, the answer is unequivocally ""yes."" Learn to trust your learning skills enough to be able to say ""yes"" and mean it -- Then start learning.

  8. If you're not making mistakes, you aren't learning. Celebrate them. Then fix them.
    Enjoy the full process of learning. Enjoy working out the bugs. Become an expert.

I have many more of them, but it may be enough to know that, if you are frustrated, it's not the subject matter, nor the concepts, nor the tools, that are the problem. And it's not you, nor your intelligence, nor your ability, that is the problem. The problem is simply the way you are looking at it.

With the right perspective, nothing is overwhelming.

","88724","","","","","2013-04-19 18:43:10","","","","1","","","","CC BY-SA 3.0" "195768","2","","195750","2013-04-22 21:23:44","","5","","

Throw it out and start over.

  • It's impossible to write code that other people can't follow but that you yourself can, let alone code that is any good. Based on your few lines of description, the code doesn't sound worth maintaining.
  • Sounds like the kind of person who would leave logic bombs. Do you want to defuse them?
  • Sounds like you're about to greatly underestimate the cost of maintaining software.
  • Fire the employee ASAP. He's already voiced that he thinks the best way to grow in the firm is to contribute as much negative value as possible. He is working at negative value currently.
","53263","","","","","2013-04-22 21:23:44","","","","2","","","","CC BY-SA 3.0" "196080","2","","196074","2013-04-25 12:01:02","","2","","

For what it's worth, the document you linked to gives an example case as justification:

Task op1 = FooAsync(); 
Task op2 = BarAsync(); 
await op1; 
await op2;

In this code, the developer is launching two asynchronous operations to run in parallel, and is then asynchronously waiting for each using the new await language feature...[C]onsider what will happen if both op1 and op2 fault. Awaiting op1 will propagate op1’s exception, and therefore op2 will never be awaited. As a result, op2’s exception will not be observed, and the process would eventually crash.

To make it easier for developers to write asynchronous code based on Tasks, .NET 4.5 changes the default exception behavior for unobserved exceptions. While unobserved exceptions will still cause the UnobservedTaskException event to be raised (not doing so would be a breaking change), the process will not crash by default. Rather, the exception will end up getting eaten after the event is raised, regardless of whether an event handler observes the exception.

I'm not convinced by this. It removes the possibility of an unambiguous but hard-to-trace error (a mysterious program crash that might occur long after the actual error), but replaces it with the possibility of a completely silent error--which might become an equally hard-to-trace problem later on in your program. That seems like a dubious choice to me.

The behavior is configurable--but of course, 99% of developers are just going to use the default behavior, never thinking about this issue. So what they selected as the default is a big deal.

","","user82096","","","","2013-04-25 12:01:02","","","","3","","","","CC BY-SA 3.0" "304613","2","","304598","2015-12-09 02:07:38","","15","","

There are some approaches that would work better for some languages than others. For example, soundex (and another description I like) was designed for English pronunciations of names. With soundex, Michael becomes M240. This has several steps:

  1. First letter is isolated. (M and ichael)
  2. All vowels are removed from remainder (M and chl)
  3. Consonants are replaced
    • c -> 2
    • l -> 4
  4. Pad with zeros to four characters (M24 becomes M240).

The grouping of the consonant conversions is based on phonetic similarity - B, F, P and V all map to 1.

And there are variations on this over time. It is particularly useful in genealogy where the spelling of a name may change over time, but the pronunciation remains similar.
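
A rough Java sketch of that encoding (simplified: the full algorithm has extra rules for h/w and adjacent duplicate codes that are glossed over here, and it assumes a non-empty ASCII name; digitFor is just a helper I made up):

static String soundex(String name) {
    StringBuilder out = new StringBuilder();
    out.append(Character.toUpperCase(name.charAt(0)));    // 1. keep the first letter

    int previous = digitFor(name.charAt(0));
    for (int i = 1; i < name.length() && out.length() < 4; i++) {
        int digit = digitFor(name.charAt(i));
        // 2./3. vowels (and h, w, y) are dropped; consonants become digits,
        //       and a digit repeated from the previous letter is skipped
        if (digit > 0 && digit != previous) {
            out.append((char) ('0' + digit));
        }
        previous = digit;
    }
    while (out.length() < 4) {
        out.append('0');                                   // 4. pad with zeros
    }
    return out.toString();
}

static int digitFor(char c) {
    switch (Character.toLowerCase(c)) {
        case 'b': case 'f': case 'p': case 'v':            return 1;
        case 'c': case 'g': case 'j': case 'k':
        case 'q': case 's': case 'x': case 'z':            return 2;
        case 'd': case 't':                                 return 3;
        case 'l':                                           return 4;
        case 'm': case 'n':                                 return 5;
        case 'r':                                           return 6;
        default:                                            return 0;  // vowels, h, w, y
    }
}

With this, soundex applied to Michael (or Micheal) comes out as M240, as above.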


There are also approaches such as match rating, which was developed by the airlines for names (rather than for American genealogy).

The encoding of match rating approach (MRA) is:

  1. Delete all non-leading vowels (Michael becomes Mchl and Anthony becomes Anthny)
  2. Remove the second consonant of any doubles
  3. If the string is longer than 6 characters, reduce the remaining string to 6 characters by taking the first three and last three.
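
A rough Java sketch of just these encoding steps (the comparison side of MRA is more involved and not shown here; mraEncode is simply a name I'm using for illustration):

static String mraEncode(String name) {
    StringBuilder out = new StringBuilder();
    char previous = 0;
    for (int i = 0; i < name.length(); i++) {
        char c = Character.toUpperCase(name.charAt(i));
        boolean vowel = c == 'A' || c == 'E' || c == 'I' || c == 'O' || c == 'U';
        if (i > 0 && vowel) {
            previous = c;
            continue;              // 1. delete all non-leading vowels
        }
        if (c == previous) {
            continue;              // 2. drop the second of any doubled letter
        }
        out.append(c);
        previous = c;
    }
    // 3. if longer than 6 characters, keep the first three and the last three
    if (out.length() > 6) {
        return out.substring(0, 3) + out.substring(out.length() - 3);
    }
    return out.toString();
}

With this, Michael becomes MCHL and Anthony becomes ANTHNY, as in the description above.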

The full specification for this can be found on archive.org - note that it is ""not small"" (the printed form is 214 pages).

The comparisons have a matching threshold based on how long the text is.

There are other phonetic algorithms too.


So, what I would encourage you to do is either take the soundex as is, take the match rating approach as is, or modify the soundex based on the Romanian consonants and Polish consonants.

Remember that with soundex, the consonants are grouped. (In Polish, m, n, ɲ are all nasal consonants to be grouped, and you would likely group the labial, dental, and alveolar plosives together, be they voiceless or voiced - granted, I don't know Polish, so I don't know if I'm just saying things that aren't true there.)

Then just convert all the names in the database to the two different soundex systems and find out which names have the fewest collisions in the different languages. This gives you distinct names, so that Smith doesn't show up as Smyth.


This, however, only solves the ""name likely to collide with other names and be misheard"" problem. It doesn't address the other direction, the ""name heard correctly, written down incorrectly"" problem, and for that one should focus attention on common names.

For example, Michael was a very common name in the US from the early 1950s to the late 1970s. It was really popular. However, for some reason, the name Micheal was kind of popular in the 1950s (it got up to the 83rd most common name at its peak). And I am certain that people named Micheal constantly got their name misspelled.

Thus, you should focus on names where there is one name that dominates the popularity of the name for a given pronunciation. Glancing at another data consumer for the names by year, you can see that names beginning with Jam... for a boy are a mess with Jamaal, Jamal, Jamar and others. Incidentally, these names have slightly different soundexes for American English (J540, J540 and J560 - the l and r are in different groups even though they are closely related in phonetics). However, for someone from, say, Japan, there is only one sound in the phonetic region where l and r are pronounced in American English. This may also pose a challenge with leading consonants in soundex that one should be aware of (I once worked with a Japanese woman who called herself Risa (with an 'R') rather than Lisa as a Romanization of her Japanese name).

You will note that my examples are for the United States. That data is easily accessible. Apparently there are some things for Poland and Hungarian, and only hints at Hungarian name commonality... I suspect that searching in a language other than English might be helpful there.

So: given the soundex for a name, you want few collisions, and you want the actual spelling to be the dominant one in that set of collisions. Preferably, this is a common name. Looking at that Hungarian list, going with Krisztián would likely get misspellings, while Zoltán is less likely to (it was the #22 most common baby name in 2011 in Hungary!). That said, you can't go wrong with Michael.

","","user40980","","user40980","2015-12-09 19:50:43","2015-12-09 19:50:43","","","","4","","","","CC BY-SA 3.0" "196239","2","","196043","2013-04-26 13:05:34","","4","","

In my opinion, what people colloquially consider a ""programming language"" is actually three separate things:

  1. Language type and syntax
  2. Language IDE
  3. Available libraries for a language

For instance, when somebody brings up C# in a discussion you may think he/she is talking about language syntax (1), but it's 95% certain that the discussion will involve the .Net framework (3). If you are not designing a new language, it's hard and usually pointless to isolate (1) and ignore (2) and (3). That's because the IDE and standard library are ""comfort factors"", things that directly affect the experience of using a certain tool.

For the last few years I too have participated in Google Code Jam. The first time, I opted for C++ because it has nice support for reading input. For example, reading three integers from standard input in C++ looks like this:

int n, h, w;
cin >> n >> h >> w;

While in C# the same would look like this:

int n, h, w;
string[] tokens = Console.ReadLine().Split(' ');
n = int.Parse(tokens[0]);
h = int.Parse(tokens[1]);
w = int.Parse(tokens[2]);

That's a lot more mental overhead for simple functionality. Things get even more complicated in C# with multiline input. Maybe I simply hadn't figured out a better way back then. Anyway, I failed to pass the first round because I had a bug that I couldn't correct before the end of the round. Ironically, the input-reading method obfuscated the bug. The problem was simple: the input contained a number that was too big for a 32-bit integer. In C#, int.Parse(string) would throw an exception, but in C++ the input stream just sets an error flag and fails silently, leaving the unsuspecting developer unaware of the problem.

Both examples demonstrate how the library was used rather than language syntax. The first one demonstrates verbosity and the other demonstrates reliability. Many libraries are ported to multiple languages, and some languages can use libraries that are not specifically built for them (see @vartec's answer about Python with C libraries).

To wrap this up, knowing the right algorithm helps. In coding competitions it's crucial, especially when resources such as execution time and memory are purposely limited. In application development it's welcome but generally not crucial. Maintainability is more important there. It can be achieved by applying correct design patterns, having good architecture, readable code and relevant documentation, and all of those methods heavily depend on in-house and 3rd-party libraries. So, I find it more important to know what kinds of wheels have already been invented and how they fit than how to make my own.

","55197","","","","","2013-04-26 13:05:34","","","","3","","","","CC BY-SA 3.0" "196261","2","","196257","2013-04-26 16:11:23","","7","","

Work-related noise is common in Agile environments.

One might think that this would be a distracting environment. It would be easy to fear that you'd never get anything done, because of the constant noise and distraction. In fact, this doesn't turn out to be the case. Moreover, instead of interfering with productivity, a University of Michigan study suggested, working in a ""war room"" environment may increase productivity by a factor of 2.

Robert C. Martin - Agile Principles, Patterns, and Practices

On the other hand, non-work-related noise should be avoided at all costs.

Peopleware is the oft-quoted source for why thought-workers should be able to work in relative quiet. But, even Peopleware, after describing a working environment where regular tannoy announcements interrupt an entire office full of people to attract the attention of one, goes on to talk about perfect working environments where a team sit together in an office, with full control over where the desks and other furniture is positioned.

Peopleware suggests that every thought-worker should get a lot more space than most of us get, but it still doesn't suggest complete isolation. In fact, it explains in great detail how a team will develop its own cycles of noise and silence. My observations in teams isolated from the business, but not each other, have been the same.

","12828","","-1","","2020-06-16 10:01:49","2013-04-26 16:23:29","","","","8","","","","CC BY-SA 3.0" "196524","1","196532","","2013-04-29 13:32:36","","11","16589","

Preamble
My aim is to create reusable code for multiple projects (and also publish it on github) to manage subscriptions. I know about stripe and recurring billing providers, but that's not what this module is aiming for. It should just be a wrapper/helper for calculating account balance, easy notifications to renew a subscription, and handle price calculations.

There are countries where you can't use recurring billing because the providers or payment options have poor or no support for it, or are too expensive (micropayments). And there are people who don't want to use recurring billing but would rather pay their bill manually / have an invoice at the end of the year. So please don't suggest paypal recurring billing, recurly or similar services.

Situation
Let's say you have a model that can subscribe to a subscription plan (e.g. User). This model has a field that stores the identifier of a subscription plan it is currently subscribed to. So, on every plan change, the change is recorded.

There is a model (e.g. SubscriptionPlanChanges) with the following fields recording the mentioned changes:

  • subscriber relating to the subscribing model (User in this case)
  • from_plan defining the plan identifier the model had before change
  • to_plan defining the plan identifier the model has selected now
  • created_at is a date-time field storing the change
  • valid_until stores the date until the actual subscription is valid
  • paid_at is also a date-time field that defines if (and when) subscription was paid

Of course, that layout is open to discussion.

Question of account balance
When a User changes his/her subscription plan, I need to compare the plan fields, get the prices, and calculate the deduction for the new plan based on the current plan's valid_until and its price. Say: you subscribed for a year of plan A, but after 6 months you upgrade to plan B, so you get a deduction of half the paid price of plan A, covering the unused 6 months.
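
To make the numbers concrete, here is a rough Java sketch of the pro-rata calculation I have in mind (the names are made up, and it assumes monthly granularity):

// Hypothetical names; monthly granularity assumed.
static long upgradeCharge(long oldPlanPricePaid, int oldPlanTotalMonths,
                          int oldPlanMonthsUsed, long newPlanPrice) {
    // Credit for the unused part of the old plan...
    long credit = oldPlanPricePaid * (oldPlanTotalMonths - oldPlanMonthsUsed) / oldPlanTotalMonths;
    // ...is deducted from the price of the new plan.
    return newPlanPrice - credit;
}

// Example from above: a year of plan A, upgraded after 6 months:
// upgradeCharge(priceOfA, 12, 6, priceOfB) == priceOfB - priceOfA / 2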

What I am wondering: If a user e.g. switches to the free plan, he has a credit which can be deducted if the user wants to switch again. Would you cache that value in an additional field, or calculate through all the records related to that user every time? Would you add/change something about the table layout?

Question of easy comprehensibility
When the end of a subscription period arrives, the user gets notified and has the possibility to renew his subscription by paying again. The easiest way would be to just update paid_at and valid_until with the new subscription options. However, I am not sure whether that stores all the data someone might need, like a payment/subscription history.

Another option would be to create an additional record for this, where from_plan and to_plan have the same identifier (thus symbolizing ""no change""). But wouldn't that interfere with calculating the account balance in some way?

If someone could point me into the right direction about the logics handling such subscriptions, I'd appreciate it very much.


UPDATE
Thanks for the help so far. I think my question was too vague, so I'll try to be more precise by using less abstraction. Unfortunately, I have not been able to solve my problem yet.

Case A
User can select Subscription Plan A. This currently stores a SubscriptionPlanChange to keep track of it. After e.g. 5 months, User upgrades his subscription to Subscription Plan B. So he pays the price for his new subscription, deducting the price of plan A for the unused 7 months.

Case B
After 3 months, User rolls back to his Subscription Plan A. He does not have to pay but receives a balance for it so that, at the end of the subscription, he gets that balance deducted for his new subscription.

Case C
User can select a subscription plan for a sub-service that has independent subscription plans. Same Case A and Case B can apply for that sub-service subscription.

Case D
User cancels one of his subscriptions. This results in a top-up of his balance.

My question (currently, at least) mainly depends on how to store that data properly so I can reproduce a history of subscriptions for business analysis and calculate balances, get outstanding payments based on the subscriptions etc.

I am also not sure if the balance should be stored in e.g. the users model itself, or if it is not stored but can be calculated any time based on the stored data / history.

Some things to note, although I don't think that they should introduce problems:

  • It does not have to be a User, it could be anything, that's why the Subscriber is polymorphic
  • Plans do not necessarily have to be plans, but could be e.g. magazines, as mentioned. That's what I've described with Case C and Case D.
","27714","","27714","","2013-04-29 18:05:34","2013-10-15 22:19:57","Handling subscriptions, balances and pricing plan changes","","2","3","10","2015-07-22 12:39:07","","CC BY-SA 3.0" "305051","2","","304878","2015-12-14 14:55:01","","5","","

So I tried to do a bit of research on this by looking for PDP-10 / TOPS-10 manuals in order to find out what the state of the art was before pipes. I found this, but TOPS-10 is remarkably hard to google. There are a few good references on the invention of the pipe: an interview with McIlroy, on the history and impact of UNIX.

You have to put this into historical context. Few of the modern tools and conveniences we take for granted existed.

"At the start, Thompson did not even program on the PDP itself, but instead used a set of macros for the GEMAP assembler on a GE-635 machine."(29) A paper tape was generated on the GE 635 and then tested on the PDP-7 until, according to Ritchie, "a primitive Unix kernel, an editor, an assembler, a simple shell (command interpreter), and a few utilities (like the Unix rm, cat, cp commands) were completed. At this point, the operating system was self-supporting, programs could be written and tested without resort to paper tape, and development continued on the PDP-7 itself."

A PDP-7 looks like this. Note the lack of an interactive display or hard disk. The "filesystem" would be stored on the magnetic tape. There was up to 64kB of memory for programs and data.

In that environment, programmers tended to address the hardware directly, such as by issuing commands to spin up the tape and process characters one at a time read directly from the tape interface. UNIX provided abstractions over this, so that rather than "read from teletype" and "read from tape" being separate interfaces they were combined into one, with the crucial pipe addition of "read from output of other program without storing a temporary copy on disk or tape".

Here is McIlroy on the invention of grep. I think this does a good job of summing up the amount of work required in the pre-UNIX environment.

"Grep was invented for me. I was making a program to read text aloud through a voice synthesizer. As I invented phonetic rules I would check Webster's dictionary for words on which they might fail. For example, how do you cope with the digraph 'ui', which is pronounced many different ways: 'fruit', 'guile', 'guilty', 'anguish', 'intuit', 'beguine'? I would break the dictionary up into pieces that fit in ed's limited buffer and use a global command to select a list. I would whittle this list down by repeated scannings with ed to see how each proposed rule worked."

"The process was tedious, and terribly wasteful, since the dictionary had to be split (one couldn't afford to leave a split copy on line). Then ed copied each part into /tmp, scanned it twice to accomplish the g command, and finally threw it away, which takes time too."

"One afternoon I asked Ken Thompson if he could lift the regular expression recognizer out of the editor and make a one-pass program to do it. He said yes. The next morning I found a note in my mail announcing a program named grep. It worked like a charm. When asked what that funny name meant, Ken said it was obvious. It stood for the editor command that it simulated, g/re/p (global regular expression print)."

Compare the first part of that to the cat names.txt | awk '{print $2 ", " $1}' | sort | uniq | column -c 100 example. If your options are "build a command line" versus "write a program specifically for the purpose, by hand, in assembler", then it's worth building the command line. Even if it takes a few hours of reading the (paper) manuals to do it. You can then write it down for future reference.

","29972","","-1","","2020-06-16 10:01:49","2015-12-14 14:55:01","","","","0","","","","CC BY-SA 3.0" "196535","1","","","2013-04-29 15:13:07","","8","1123","

In this blog post about acceptance criteria the author explains that good acceptance criteria should:

  • State an intent not a solution (e.g. “The user can choose an account” rather than “The user can select the account from a drop-down”)

  • Are independent of implementation (ideally the phrasing would be the same regardless whether this feature/story would be implemented on e.g. web, mobile or a voice activated system)

  • Are relatively high level (not every detail needs to be in writing)

And further details such as:

  • The column heading is “Balance”
  • The rolling balance format is 99,999,999,999.9 D/CR
  • We should use a dropdown rather than checkboxes

should be moved to either a Team internal documentation or Automated acceptance tests

However, I often hear people frown on using Cucumber or similar frameworks for GUI tests. Moreover, using internal documentation could generate lots of problems if the documentation isn't updated regularly.

I'm still struggling to find an effective way to capture such details during the conversation with the customer.

","50440","","50440","","2013-04-30 16:02:50","2013-04-30 17:16:12","Where to put details about the acceptance criteria of a user story?","","4","0","0","","","CC BY-SA 3.0" "196881","2","","110979","2013-05-02 20:38:37","","24","","

I'd like to add an answer to this question as I've been trudging through some good, bad but mostly ugly Java lately and I have a whole new whopper-load of gross over-generalizations about Java and Java devs vs. JS and JS devs that might actually be based in something vaguely resembling useful truth.

There Are IDEs But It Can Be Helpful to Understand Why There Haven't Been Many

I've been trying Webstorm out now that I find myself drawn to Node development, and it's good enough that I actually bought it, but I still tend to open js files in Scite more often than WS. The reason for this is that you can do a lot more with a lot less in JS but also because UI work gives immediate feedback, browser dev tools (Chrome's and Firebug in particular) are actually quite excellent, and (accounting for non-browser contexts) re-running altered code is fast and easy without a compile step.

Another thing I'm fairly convinced of is that IDEs basically create their own demand by enabling sloppy code which you really can't afford in JavaScript. Want to learn how we manage in JS? It might help to start by trying to write something non-trivial in Java without an IDE and pay close attention to the things that you have to start doing and think about in order to actually be able to maintain/modify that code without an IDE moving forward. IMO, those same things are still critical to writing maintainable code whether you have an IDE or not. If I had to write a 4-year programming curriculum, it wouldn't let you touch an IDE for the first two years in the interest of not getting tools and dependencies twisted.

Structure

Experienced JS devs dealing with complex applications can and do structure their code. In fact it's one thing we tend to have to be better at with an early history that lacked IDEs to read the code for us but also because powerfully expressive languages can powerfully express completely unmaintainable disaster codebases very quickly if you don't code thoughtfully.

I actually had a fairly steep learning curve in understanding our Java codebase recently until I finally realized that none of it was proper OOP. Classes were nothing more than bundles of loosely related methods altering globally available data sitting around in beans or DTOs or static getters/setters. That's basically the same old beast that OOP was supposed to replace. So I stopped looking and thinking about the code basically. I just learned the shortcut keys and traced through the messes and everything went much more smoothly. So if you're not in the habit already, think a lot harder about OOD.

A well-structured JS app at the highest level will tend to consist of complex functions (e.g. jQuery) and objects interacting with each other. I would argue that the mark of a well-structured, easily maintained app in any language is that it's perfectly legible whether you're looking at it with an IDE or Notepad++. It's one of the main reasons I'm highly critical of dependency injection and test-first TDD taken to the extreme.

And finally, let go of classes. Learn prototypal inheritance. It's actually quite elegant and easy to implement when you actually need inheritance. I find composition-based approaches tend to work much better in JS, however, and I personally start to get ill and have EXTJS night-terrors any time I see more than one or two levels of inheritance going on in any language.

Core Principles First

I'm talking about the core stuff that all other good practices should derive from: DRY, YAGNI, the principle of least astonishment, clean separation of problem domains, writing to an interface, and writing human legible code are my personal core. Anything a little more complex that advocates the abandonment of those practices should be considered Kool Aid in any language, but especially a language like JavaScript where it's powerfully easy to leave a legacy of very confusing code for the next guy. Loose coupling, for instance, is great stuff until you take it so far that you can't even tell where interaction between objects is happening.

Don't Fear Dynamic Typing

There aren't a lot of core types in JavaScript. For the most part, dynamic casting rules are practical and straight-forward but it pays to learn them so you can better learn to manage data flow without needless casts and pointless validation routines. Trust me. Strict types are great for performance and spotting problems on compile but they don't protect you from anything.

Learn the Crap out of JS Functions and Closures

JS's first-class functions are arguably the main reason JS won the ""Only Language Worth Touching the Client-Side Web With Award."" And yes, there actually was competition. They're also a central feature of JS. We construct objects with them. Everything is scoped to functions. And they have handy features. We can examine params via the arguments keyword. We can temporarily attach and fire them in the context of being methods of other objects. And they make event-driven approaches to things obscenely easy to implement. In short, they made JS an absolute beast at reducing complexity and adapting varying implementations of JS itself (but mostly the DOM API) right at the source.

Re-Evaluate Patterns/Practices Before Adopting

First class functions and dynamic types render a lot of the more complex design patterns completely pointless and cumbersome in JS. Some of the simpler patterns, however, are incredibly useful and easy to implement given JS's highly flexible nature. Adapters and decorators are particularly useful and I've found singletons helpful for complex ui widget factories that also act as event-managers for the ui elements they build.

Follow the Language's Lead and Do More With Less

I believe one of the Java head honchos makes the argument somewhere that verbosity is actually a positive feature that makes code easier to understand for all parties. Hogwash. If that were true, legalese would be easier to read. Only the writer can make what they've written easier to understand and you can only do that by putting yourself in the other guy's shoes occasionally. So embrace these two rules. 1. Be as direct and clear as possible. 2. Get to the damn point already. The win is that clean, concise code is orders of magnitude easier to understand and maintain than something where you have to traverse twenty-five layers to get from the trigger to the actual desired action. Most patterns that advocate that sort of thing in stricter languages are in fact workarounds for limitations that JavaScript doesn't have.

Everything is Malleable and That's Okay

JS is probably one of the least protectionist languages in popular use. Embrace that. It works fine. For instance you can write objects with inaccessible persistent ""private"" vars by simply declaring regular vars in a constructor function and I do this frequently. But it's not to protect my code or users of it ""from themselves"" (they could just replace it with their own version during run-time anyway). But rather it's to signal intent because the assumption is that the other guy is competent enough to not want to mangle any dependencies and will see that you're not meant to get at it directly perhaps for a good reason.

There Are No Size Limits, Only Problem Domains

The biggest problem I have with all the Java codebases I've seen is an overabundance of class files. First of all SOLID is just a confusing reiteration of what you should already know about OOP. A class should handle a specific set of related problems. Not one problem with one method. That's just taking bad old chaining func-spaghetti C code only with the addition of all the pointless class syntax to boot. There is no size or method limit. If it makes sense to add something to an already long function or class or constructor, it makes sense. Take jQuery. It's an entire library-length toolset in a single function and there is nothing wrong with that. Whether we still need jQuery is up to reasonable debate but in terms of design, you can learn a hell of a lot about how to write effective JavaScript by understanding how JQ is architected for minimal memory usage/performance impact through slick use of closures and the prototype property.

If Java is All You Know, Dabble in Something With a Non-C-Based Syntax

When I started messing with Python because I liked what I was hearing about Django, I learned to start separating syntax from language design. As a result, it became easier to understand Java and C as a sum of their language design parts rather than a sum of things they do differently with the same syntax. A nice side-effect is that the more you understand other languages in terms of design, the better you'll understand the strengths/weaknesses of the one you know best through contrast.

Conclusion

Now, considering all of that, lets hit all your problem-points:

  • No immediate way of finding a function's entry point (other than a plain text search, which may then result in a subsequent searches for methods further up the call hierarchy, after two or three of which you've forgotten where you started)

Chrome and Firebug do actually have call-tracing. But see also my points on structure and keeping things concise and direct. The more you can think of your app as larger well-encapsulated constructs interacting with each other, the easier it is to figure whose fault it is when things go wrong. I'd say this is true of Java too. We have class-like function constructors that are perfectly serviceable for traditional OOP concerns.

function ObjectConstructor(){
    //No need for an init method.
    //Just pass in params and do stuff inside for instantiation behavior

    var privateAndPersistent = true;

    //I like to take advantage of function hoisting for a nice concise interface listing
    this.publicAndPointlessEncapsulationMurderingGetterSetter
    = publicAndPointlessEncapsulationMurderingGetterSetter;
    //Seriously though Java/C# folks, stop with the pointless getter/setters already

    function publicAndPointlessEncapsulationMurderingGetterSetter(arg){
        if(arg === undefined){
            return privateAndPersistent;
        }
        privateAndPersistent = arg;
    }

}

ObjectConstructor.staticLikeNonInstanceProperty = true;

var instance = new ObjectConstructor();//Convention is to  capitalize constructors

In my code, I almost never use the object literals {} as structural app components since they can't have internal (private) vars and prefer instead to reserve them for use as data structures. That helps set an expectation that maintains clarity of intent. (if you see curlies, it's data, not a component of app architecture).

  • Parameters are passed in to functions, with no way of knowing what properties and functions are available on that parameter (other than actually running the program, navigating to the point at which the function is called, and using console.logs to output all the properties available)

Again, see modern browser tools. But also, why is it such a bummer to run the program again? Reload is something a client-side web dev typically hits every few minutes because it costs you absolutely nothing to do it. This is again, another point that app structure can be helpful with but it is one down-side tradeoff of JS that you have to run your own validation when enforcing contracts is critical (something I only do at endpoints exposed to other things my codebase doesn't control). IMO, the tradeoff is well worth the benefits.

  • Common usage of anonymous functions as callbacks, which frequently leads to a spaghetti of confusing code paths, that you can't navigate around quickly.

Yeah that's bad on anything non-trivial. Don't do that. Name your functions kids. It's easier to trace things as well. You can define, evaluate (required to assign), and assign a simple trivial function in-line with:

doSomethingWithCallback( (function callBack(){}) );

Now Chrome will have a name for you when you're tracing through calls. For a non-trivial function I would define it outside of the call. Also note that anonymous functions assigned to a variable are still anonymous.

  • And sure, JSLint catches some errors before runtime, but even that's not as handy as having red wavy lines under your code directly in the browser.

I never touch the stuff. Crockford's given some good things to the community but JSLint crosses the line into stylistic preferences and suggesting certain elements of JavaScript are bad parts for no particularly good reason, IMO. Definitely ignore that one thing about regEx and negation classes followed by * or +. Wildcards perform more poorly and you can easily limit the repetition with {}. Also, ignore anything he says about function constructors. You can easily wrap them in a factory func if the new keyword bothers you. CSSLint (not Crockford's) is even worse on the bad advice front. Always take people who do a lot of speaking engagements with a grain of salt. Sometimes I swear they're just looking to establish authority or generate new material.

And again, you must unlearn what you have learned with this run-time concern you have. (it's a common one I've seen with a lot of Java/C# devs) If seeing errors in run-time still bothers you 2 years later, I want you to sit down and spam reload in a browser until it sinks in. There is no compile-time/run-time divide (well not a visible one anyway - JS is run on a JIT now). It's not only okay to discover bugs at run-time, it's hugely beneficial to so cheaply and easily spam reload and discover bugs at every stopping point you get to.

And get crackin' on those Chrome dev tools. They're built-in directly to webkit. Right-click in Chrome. Inspect element. Explore the tabs. Plenty of debug power there with the ability to alter code in the console during run-time being one of the most powerful but less obvious options. Great for testing too.

On a related note, errors are your friends. Don't ever write an empty catch statement. In JS we don't hide or bury errors (or at least we shouldn't cough YUI /cough). We attend to them. Anything less will result in debug pain. And if you do write a catch statement to hide potential errors in production at least silently log the error and document how to access the log.

","27161","","27161","","2013-05-02 23:08:05","2013-05-02 23:08:05","","","","1","","","","CC BY-SA 3.0" "197202","2","","197174","2013-05-06 18:54:59","","7","","

Overly optimistic scheduling is the term used by Steve McConnell in his book Rapid Development. He also uses the term wishful thinking.

McConnell writes (with supporting evidence) that an overly optimistic schedule actually makes the project later.


I had a bad experience with schedule pressure once. A customer was unhappy with a timeline, even though it was in line with similar projects. We hired a contract programmer with the hopes of shortening the timeline. The customer added many more features. (There was no review of ""hey, the project is taking too long, and now you want more features?"") The contract programmer was not as good as we had hoped, and I spent lots of time fixing bugs and design problems.

I don't want to live through that experience again as much as I can help it.

Yes, you should look for creative ways to help the sales team and management meet their commitments. Sales pay your salary.

Yes, you should politely and firmly speak up about unrealistic schedules. The schedule may not change, but you did what you could. You owe it to your management to give them your best professional opinion, even when it is not what they want to hear.

","60569","","","","","2013-05-06 18:54:59","","","","1","","","","CC BY-SA 3.0" "197679","2","","197675","2013-05-10 17:28:45","","55","","

Your boss may be correct: you may be ""underperforming"" (more on that in a minute). But it may not be just your level of competence that's to blame. I don't think it would be a reach to suggest forces outside your control are causing you stress, which is having a negative effect on your performance.

Let's have a look at a few of the reasons your boss may now be bringing this up:

Culture and Politics

There may be forces beyond your control requiring your boss to now voice his concern. It's important to understand the system you are working in. Your job is to make your boss look good. The only way to do that is to understand the pressures he/she is under.

Ability

It's possible that your ability is not up to par, as you say he openly stated. Here is what I would do in this situation:

Get specific feedback from your boss on how he measures performance. Are you not closing as many bugs as person X? Is there a set number of bugs you should be solving? If you are working alone then you need to make sure that the people measuring your performance are measuring it fairly and not based on some preconceived idea.

If your performance is slow and based on a real gap, identify that gap and put a detailed plan together with your boss with the aim of closing it.

This review is also a good opportunity to bring up the fact that you are not happy. It's good that you've identified that you don't love this job. But figure out why. What part of your job do you like and what don't you? It might be that this job isn't for you...

","2789","","39006","","2013-08-30 20:09:18","2013-08-30 20:09:18","","","","6","","","2013-05-28 14:35:24","CC BY-SA 3.0" "306648","2","","129530","2016-01-06 11:35:54","","1","","

Physical Leaks

The kind of bugs that GC addresses seem (at least to an external observer) the kind of things that a programmer that knows well his language, libraries, concepts, idioms, etc, wouldn't do. But I could be wrong: is manual memory handling intrinsically complicated?

Coming from the C end which makes memory management about as manual and pronounced as possible so that we're comparing extremes (C++ mostly automates memory management without GC), I'd say ""not really"" in the sense of comparing to GC when it comes to leaks. A beginner and sometimes even a pro may forget to write free for a given malloc. It definitely does happen.

However, there are tools like valgrind leak detection which will immediately spot, on executing the code, when/where such mistakes occur down to the exact line of code. When that's integrated into the CI, it becomes almost impossible to merge such mistakes, and easy as pie to correct them. So it's never a big deal in any team/process with reasonable standards.

Granted, there might be some exotic execution paths that fly under the radar of testing where free fails to be called, perhaps on encountering an obscure external input error like a corrupt file, in which case maybe the system leaks 32 bytes or something. I think that can definitely happen even with pretty good testing standards and leak-detection tools, but it would also not be so critical to leak a little bit of memory on something that almost never happens. Below we'll see a much bigger issue, where we can leak massive resources even in common execution paths in a way that GC can't prevent.

It's also difficult without something resembling a pseudo-form of GC (reference counting, e.g.) when the lifetime of an object needs to be extended for some form of deferred/asynchronous processing, perhaps by another thread.

Dangling Pointers

The real issue with more manual forms of memory management is not leaks to me. How many native applications written in C or C++ do we know of that are really leaky? Is the Linux kernel leaky? MySQL? CryEngine 3? Digital audio workstations and synthesizers? Does Java VM leak (it's implemented in native code)? Photoshop?

If anything, I think when we look around, the leakiest applications tend to be the ones written using GC schemes. But before that's taken as a slam on garbage collection, native code has a significant issue that's not related at all to memory leaks.

The issue for me was always safety. Even when we free memory through a pointer, if there are any other pointers to the resource, they will become dangling (invalidated) pointers.

When we try to access the pointees of those dangling pointers, we end up running into undefined behavior, though almost always a segfault/access violation leading to a hard, immediate crash.

All those native applications I listed above potentially have an obscure edge case or two which can lead to a crash primarily because of this issue, and there are definitely a fair share of shoddy applications written in native code which are very crash-heavy, and often in large part due to this issue.

... and it's because resource management is hard regardless of whether you use GC or not. The practical difference is often either leaking (GC) or crashing (without GC) in the face of a mistake leading to resource mismanagement.

Resource Management: Garbage Collection

Complex resource management is a difficult, manual process no matter what. GC can't automate anything here.

Let's take an example where we have this object, ""Joe"". Joe is referenced by a number of organizations to which he is a member. Every month or so they extract a membership fee from his credit card.

We also have one reference to Joe to control his lifetime. Let's say, as programmers, we no longer need Joe. He's starting to pester us and we no longer need these organizations he belongs to waste their time dealing with him. So we attempt to wipe him off the face of the earth by removing his lifeline reference.

... but wait, we're using garbage collection. Every strong reference to Joe will keep him around. So we also remove references to him from the organizations to which he belongs (unsubscribing him).

... except whoops, we forgot to cancel his magazine subscription! Now Joe remains around in memory, pestering us and using up resources, and the magazine company also ends up continuing to process Joe's membership every month.

This is the main mistake which can cause a lot of complex programs written using garbage collection schemes to leak and start using up more and more memory the longer they run, and possibly more and more processing (the recurring magazine subscription). They forgot to remove one or more of those references, making it impossible for the garbage collector to do its magic until the entire program is shut down.
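
A minimal Java sketch of that kind of mistake (the class and method names are invented for illustration):

import java.util.ArrayList;
import java.util.List;

class Member { /* Joe */ }

class Magazine {
    private final List<Member> subscribers = new ArrayList<>();
    void subscribe(Member m)   { subscribers.add(m); }
    void unsubscribe(Member m) { subscribers.remove(m); }
}

class Demo {
    public static void main(String[] args) {
        Magazine magazine = new Magazine();
        Member joe = new Member();
        magazine.subscribe(joe);

        // We think we are done with Joe, so we drop our lifeline reference...
        joe = null;
        // ...but we forgot to call magazine.unsubscribe(joe) first, so the subscriber
        // list still holds a strong reference and the collector can never reclaim him.
    }
}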

The program doesn't crash, however. It's perfectly safe. It's just going to keep hogging up memory and Joe will still linger around. For many applications, this kind of leaky behavior where we just throw more and more memory/processing at the issue might be far preferable to a hard crash, especially given how much memory and processing power our machines have today.

Resource Management: Manual

Now let's consider the alternative, where we use plain pointers to Joe and manual memory management.

These pointers don't manage Joe's lifetime. If we want to remove him from the face of the earth, we manually request to destroy him.

Now that would normally leave us with dangling pointers all over the place, so let's remove the pointers to Joe.

... whoops, we made the exact same mistake again and forgot to unsubscribe Joe's magazine subscription!

Except now we have a dangling pointer. When the magazine subscription tries to process Joe's monthly fee, the entire world will explode -- typically we get the hard crash instantly.

This same basic resource mismanagement mistake where the developer forgot to manually remove all pointers/references to a resource can lead to a lot of crashes in native applications. They don't hog up memory the longer they run typically because they will often outright crash in this case.

Real-World

Now the above example is using a ridiculously simple diagram. A real-world application might require thousands of images stitched together to cover a full graph, with hundreds of different types of resources stored in a scene graph, GPU resources associated to some of them, accelerators tied to others, observers distributed across hundreds of plugins watching a number of entity types in the scene for changes, observers observing observers, audios synced to animations, etc. So it might seem like it's easy to avoid the mistake I described above, but it's generally nowhere near this simple in a real-world production codebase for a complex application spanning millions of lines of code.

The chance that someone, some day, will mismanage resources somewhere in that codebase tends to be quite high, and that probability is the same with or without GC. The main difference is what will happen as a result of this mistake, which in turn affects how quickly the mistake will be spotted and fixed.

Crash vs. Leak

Now which one is worse? An immediate crash, or a silent memory leak where Joe just mysteriously lingers around?

Most might answer the latter, but let's say this software is designed to be run for hours on end, possibly days, and each of these Joe's and Jane's we add increases the memory usage of the software by a gigabyte. It's not a mission-critical software (crashes don't actually kill users), but a performance-critical one.

In this case, a hard crash that immediately shows up when debugging, pointing out the mistake you made, might actually be preferable to just a leaky software that might even fly under the radar of your testing procedure.

On the flip side, if it is a mission-critical software where performance isn't the goal, just not crashing by any means possible, then leaking might actually be preferable.

Weak References

There is kind of a hybrid of these ideas available in GC schemes known as weak references. With weak references, we can have all these organizations weak-reference Joe but not prevent him from being removed when the strong reference (Joe's owner/lifeline) goes away. Nevertheless, we get the benefit of being able to detect when Joe is no longer around through these weak references, allowing us to get an easily-reproducible error of sorts.

Unfortunately, weak references aren't used nearly as much as they probably should be, so a lot of complex GC applications can be susceptible to leaks even if they're potentially far less crashy than, say, a complex C application.
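
A minimal Java sketch of the idea (again with invented names around the standard WeakReference type):

import java.lang.ref.WeakReference;

class Member { /* Joe */ }

class Magazine {
    // A weak reference does not keep Joe alive; only his owner's strong reference does.
    private final WeakReference<Member> subscriber;

    Magazine(Member m) { this.subscriber = new WeakReference<>(m); }

    void chargeMonthlyFee() {
        Member m = subscriber.get();
        if (m == null) {
            // Joe has been collected: an easy-to-detect signal that we should
            // drop this subscription instead of silently leaking it.
            return;
        }
        // ...charge m...
    }
}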

In any case, whether or not GC makes your life easier or harder depends on how important it is for your software to avoid leaks, and whether or not it deals with complex resource management of this sort.

In my case, I work in a performance-critical field where resources do span hundreds of megabytes to gigabytes, and failing to release that memory when users request an unload, because of a mistake like the one above, can actually be less preferable than a crash. Crashes are easy to spot and reproduce, which often makes them the programmer's favorite kind of bug, even if they're the user's least favorite, and a lot of these crashes will show up under a sane testing procedure before they even reach the user.

Anyway, those are the differences between GC and manual memory management. To answer your immediate question, I would say manual memory management is difficult, but it has very little to do with leaks, and both GC and manual forms of memory management are still very difficult when resource management is non-trivial. The GC arguably has more tricky behavior here where the program appears to be working just fine but is consuming more and more and more resources. The manual form is less tricky, but is going to crash and burn big time with mistakes like the one shown above.

","","user204677","","user204677","2016-01-06 12:29:14","2016-01-06 12:29:14","","","","1","","","","CC BY-SA 3.0" "306705","2","","216289","2016-01-06 22:54:01","","3","","

This is Outright Dangerous!

I worked under a senior developer in a C codebase with the shoddiest ""standards"" who pushed for the same thing, to blindly check all pointers for null. The developer would end up doing things like this:

// Pre: vertex should never be null.
void transform_vertex(Vertex* vertex, ...)
{
    // Inserted by my ""wise"" co-worker.
    if (!vertex)
        return;
    ...
}

I once tried removing such a precondition check in one of these functions and replacing it with an assert to see what would happen.

To my horror, I found thousands of lines of code in the codebase which were passing nulls to this function, but where the developers, likely confused, had worked around it and just added more code until things worked.

To my further horror, I found this issue was prevalent in all sorts of places in the codebase checking for nulls. The codebase had grown over decades to come to rely on these checks in order to be able to silently violate even the most explicitly-documented preconditions. By removing these deadly checks in favor of asserts, all the logical human errors over decades in the codebase would be revealed, and we would drown in them.

It only took two seemingly-innocent lines of code like this + time and a team to end up masking a thousand accumulated bugs.

These are the kinds of practices that make bugs depend on other bugs to exist in order for the software to work. It's a nightmare scenario. It also makes every logical error related to violating such preconditions show up mysteriously a million lines of code away from the actual site in which the mistake occurred, since all these null checks just hide the bug and hide the bug until we reach a place that forgot to hide the bug.

To simply check for nulls blindly in every place where a null pointer violates a precondition is, to me, utter insanity, unless your software is so mission-critical against assertion failures and production crashes that the potential of this scenario is preferable.

Is it reasonable for a coding standard to require that every single pointer dereferenced in a function be checked for NULL first—even private data members?

So I'd say, absolutely not. It's not even ""safe"". It may very well be the opposite and mask all kinds of bugs throughout your codebase which, over the years, can lead to the most horrific scenarios.

assert is the way to go here. Violations of preconditions should not be allowed to go unnoticed, or else Murphy's law can easily kick in.

","","user204677","","user204677","2016-01-06 23:25:16","2016-01-06 23:25:16","","","","6","","","","CC BY-SA 3.0" "406561","2","","406554","2020-03-15 11:04:01","","1","","

It looks like you simply have 'lite' and 'full' objects, which is a pretty common anti-pattern.

Whether your object has a child object or just an Id reference is generally governed by deeper concerns than just convenience of display.

After all, you can retrieve the child object separately and use it just as easily, if not more easily, in most cases.

Does your Job object have methods which call child object methods?

Are child objects shared over multiple parents?

Can you update the underlying data source atomically with the child objects attached?

Can you read the underlying data efficiently to populate your chosen structure?

You may have fallen into the trap of OOP data objects (i.e. Cat and Dog are Animals, and Animals have Legs, because that is how the real world looks) rather than 'programmatic' OOP (i.e. Array implements IEnumerable and has Items because it needs to in order to function).

The alternative is to split your objects logically; for pure data this would probably mean mostly 'lite' objects, assembled as required for the function you are working on.

For example:

PrintInvoice(jobId) // needs child data
{
    Job job = repo.GetJob(jobId)
    Teams teams = repo.GetTeamsForJob(job.Id)

    print job.Title
    foreach(id in job.TeamIds)
    {
        print teams[id].TeamName
        print teams[id].Price
    }
}

ListJobs() // just needs top-level data
{
    var jobs = repo.GetJobs()
    foreach(j in jobs)
    {
       print j.Title
    }
}

Now I can assemble the information from both 'lite' and 'full' objects as I require for a specific task. I've saved huge amounts of code; if I have one team that's in two jobs, I don't have two copies of it; and if I want to list all the teams over multiple jobs, I don't have to drill down into child objects and dedupe to get them, etc.

","177980","","177980","","2020-03-15 12:48:22","2020-03-15 12:48:22","","","","6","","","","CC BY-SA 4.0" "306958","2","","306955","2016-01-09 08:20:17","","20","","

The declarative code is harder to debug.

I would say that is a function of the quality of your debugger. If your debugger understands the imperative constructs but not the declarative ones, then of course the declarative ones are harder to debug. But you could easily imagine a different debugger with different priorities, where the opposite is true.

There are some language designers who care so much about tooling that they are willing to let toolability influence the language design, or even compromise on language features, to facilitate good tools. The obvious example is Kotlin, which is designed by a tool vendor (JetBrains). The lead developers of Scala are also famously opposed to expanding Scala's type inference, not because they don't know how to do it (they do) or because it's hard to implement (it is, but they have smart compiler writers), but because they haven't figured out a way to implement it with good error messages. (Think mid-90s C++ template instantiation errors.)

The declarative code is a bit slower. […] With declarative it seems like you can easily lose touch with what your code is actually doing.

Yes. That is the whole point. That's why it's called ""declarative"": because you declare what you want to happen, not how you want it to happen.

This gives a lot more leeway for the compiler to optimize things.

There's a great example in one of the Supero (a supercompiler for Haskell) papers. The author compares a simple, expressive, declarative, purely functional, one-line word count function in Haskell (main = print . length . words =<< getContents), compiled with a combination of Supero, GHC, and YHC with a hand-optimized state-machine-based while loop in C, and much to his own surprise finds that the Haskell is marginally faster. How could that happen? Well, the compiler actually transformed the Haskell code into the same state-machine loop that the hand-written C version has, but it can do one additional trick that C (at least without inline assembly) can't: encode the state(s) in the program counter.
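For a sense of what the hand-written side of that comparison looks like, here is a generic sketch of a state-machine-style word count (this is not the code from the paper, just an illustration of the general shape):

#include <cstdio>

int main()
{
    int c = 0;
    int words = 0;
    bool in_word = false; // the ""state"" of the little state machine
    while ((c = std::getchar()) != EOF)
    {
        bool is_space = (c == ' ' || c == '\n' || c == '\t');
        if (!is_space && !in_word)
            ++words; // transition: gap -> word
        in_word = !is_space;
    }
    std::printf(""%d\n"", words);
}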

In your case, you have created a declarative DSL, if you will. But the C++ optimizer doesn't know anything about the semantics of your DSL, so it can't take advantage of the additional freedom.

The declarative code is harder to modify over time, I find. If I want to do some extra operation on each character, I need to add another […] lambda etc. In imperative programming I just add a line of plain ol' code inside the loop […].

I don't follow. Is there really a difference between:

step1();
step2();
step2a(); // inserted later
step3();

and

transform1    | 
  transform2  |
  transform2a | // inserted later
  transform3;
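
To make that concrete with an invented C++ example (nothing here is taken from the question; it's just a sketch, and the declarative half uses plain std::transform):

#include <algorithm>
#include <cctype>
#include <iostream>
#include <iterator>
#include <string>

int main()
{
    std::string input = ""hello world"";

    // Imperative: the extra operation is one more line inside the loop.
    std::string output1;
    for (char c : input)
    {
        c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        if (c == ' ') c = '_'; // inserted later
        output1 += c;
    }

    // Declarative-ish: the extra operation is one more transform in the chain.
    std::string upper, output2;
    std::transform(input.begin(), input.end(), std::back_inserter(upper),
                   [](unsigned char c) { return static_cast<char>(std::toupper(c)); });
    std::transform(upper.begin(), upper.end(), std::back_inserter(output2),
                   [](char c) { return c == ' ' ? '_' : c; }); // inserted later

    std::cout << output1 << '\n' << output2 << '\n';
}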

I find the imperative style more intuitive as I'm writing.

There is no such (absolute) thing as ""intuitive"". Intuitiveness is all about familiarity. Remember the Star Trek Movie, when Scotty tries to use a computer with what we consider to be an intuitive user interface? He ends up trying to speak voice commands into the mouse.

A lot of people consider loops to be intuitive, and recursion un-intuitive. However, just a couple of months ago, there was a question in the Ruby tag on StackOverflow by a complete programming newbie, who had written code like this:

def main
  # do something
  main
end

To him, this was the intuitive way to do something over and over again. (And why not? ""Do something, and then start again what you are doing"" is a perfectly sensible mental model for what we imperative guys call a ""loop"", is it not?) And for a Scheme, ML, or Haskell programmer, this would be intuitive, and loops wouldn't. (In fact, a pure ML or Haskell programmer wouldn't even know what we are talking about, because their languages have no loops.)

Another example from me personally: as a Ruby programmer and fan of Smalltalk, I cannot understand why anyone would ever want a static AOT compiler. And yet, the C++ community cannot understand why anyone would ever want a dynamic JIT compiler.

Unless and until you have written the same amount of (serious, non-toy, complex, large, production-level system) code in both styles, the style which is more familiar will be more ""intuitive"". That's just the nature of things.

","1352","","","","","2016-01-09 08:20:17","","","","4","","","","CC BY-SA 3.0" "307024","2","","299653","2016-01-10 14:08:40","","13","","

By discarding REST, you lose much more than just HATEOAS. If your microservices are public (and it's a good idea for them to be public or at least tend towards being public one day¹), using anything other than REST and SOAP would be problematic:

  • Some developers never used AMQP,

  • Some have used AMQP, but are often much more familiar with REST and SOAP,

  • AMQP libraries for some languages are not particularly straightforward,

  • Manual experimentation with the service is very limited: I can use CURL to do any request to Amazon S3; what should I install on my machine if I want to play with an AMQP variant of S3?

  • Debugging REST and SOAP is easy: I just track the HTTP exchanges and analyze them. I'm not sure what tools I should use to debug AMQP exchanges.

AMQP is great, but it's designed for a very specific purpose: exchanges based on events. While it's technically possible to do RPC with AMQP, that's not its primary purpose.

The asynchronous aspect is important too. Sometimes it's a benefit: I don't want to block the user interface of an app while doing requests to servers. Sometimes it just makes things harder than they need to be: if I need to recover a file backup from Amazon S3 because the local one was corrupted, and then restore the backup, my batch file needs CURL to finish its job before continuing, and a synchronous operation (with a specific timeout) makes perfect sense.

Keep REST for primary operations:

  • Getting a product,

  • Storing an invoice,

and use AMQP for the tasks where messaging actually makes sense:

  • Processing all invoices from September and notifying the app when the report is ready to be shown (given that the operation usually takes from two to ten minutes),

    The benefit of AMQP here is its asynchronous aspect. An HTTP request pending for ten minutes has a good chance of causing a timeout and other issues.

  • Dispatching the information that the backups were corrupted to everyone who may be interested, such as the support people, the database administrators, the monitoring team, the developers of the application which uses this database, etc.

    The benefit of AMQP here is, among others, the ability to add the subscribers without changing the application which tracks backups and triggers the alert when it finds a corrupted one.


¹ A public web service isn't necessarily used by users outside a company. In large or medium-size companies, your service is often used by other divisions of the same company, and it has the same requirements as one used by any third party: it should mistrust any call (the fact that the person calling your service works in the same company as you doesn't mean they won't exploit its security issues), it should be documented properly (because that same person doesn't necessarily know your phone number or speak your language), etc.

","6605","","-1","","2019-03-04 00:09:46","2019-03-04 00:09:46","","","","1","","","","CC BY-SA 4.0" "198767","1","","","2013-05-20 15:03:39","","14","1684","

I am interested in knowing how to deal with a software development process that has not changed for years and will eventually lead to product and team failure. Yes, probably the easiest way to solve this is changing jobs, but in this economy that is easier said than done. However, if you have specific examples, have seen or been in the same situation multiple times, and think that the best solution to address these issues is to leave the company, then please support your answer. The point is, this question really has an answer, especially if multiple experts on the subject end up indicating that the best route to go is route A.

I know tons of developers have been or are in similar situations. This is one of the main reasons why companies go from being #1 in their market to becoming last or even leaving the market. Hopefully the answers in this post will help other developers facing similar obstacles. In a small or large development team, this usually happens:

  • Some developers seem not to care, decide to go with the flow, and prefer to leave code full of code smells (and the development process) as is;
  • Others get tired of the lack of change, resign, and move to another company;
  • Others seem afraid to speak up and prefer to stay quiet;
  • At times a few developers, or just one, try to speak up for improving the product and tell the team how important it is to follow best coding practices, and the benefits of doing so for clients, users, and the team. These developers usually decide to stay with the team for reasons such as benefits that very few software companies offer, or a product with lots of potential, etc.

The product in our team is just a fraction of where the company gets its revenue, as it has an umbrella of products (this company is not a software/hardware company; therefore there is no constant patent litigation, at least for now, which would create job instability). What I have learned during these years, from other developers' experiences and my own, is that really getting to know a development team takes time: not days, not weeks, but a few months. During the interview process, if the team wants to hire you or needs you, they make everything sound great and might tell you what you want to hear. However, the reality is different when you start working on that team and begin digging into the code and moving through the complete SDLC process. This is when, as a developer, you start seeing the reality of the job you got into. This reality makes it difficult to want to move from one company to another, because it is hard to know if the company you move to will be better or worse. Yes, you can read Glassdoor reviews etc., but how many of those online reviews are real and not from HR?

What would be the best way to tackle the issues outlined below, considering that the manager has always resisted change from the beginning, and previous developers have been doing the same things for years?

  • Lack of product innovation for years: The product has lots of potential and brings good revenue to the company, but it looks like it was made 20 years ago. Some users have complained that the product is neither user friendly nor intuitive, and others have mentioned that they are used to apps like Gmail and get frustrated because the product lacks similar features. The main issue here is that when you, as a developer, try to make changes to the product and move main elements around by just a couple of pixels (to make it more user friendly or intuitive), the manager panics and tells you to put them back where they were. If you try to add a feature that would improve productivity for users, the manager asks you to remove it because ""users are used to doing the process the way it is"", etc. I think you get the point about the resistance to change, improvement, and innovation (the manager is not open to change, even when you as a developer provide strong arguments for the benefits). The company has a few competitors in the field (the products of a few of them are way more competitive), but somehow the company has kept its current clients for years.

  • Lack of project management coordination: As a result of this, some projects are delivered late and with bugs, and some clients complain (clients report the bugs too), or the budget is used up too fast prior to delivering the project, etc. I've provided them a few project coordination tips, and those ideas are now being used regularly to track the progress of projects and the tasks to be done.

  • Bad software development practices: Code smells are seen in most if not all files; there is no documentation; there is code redundancy; the front-end tier and back end are mixed in the same file; the development tools are outdated; and there is no real testing environment or test tooling (just copy and paste files from the dev environment to production, then manually test that things look good and release). Most of the development tools I use for development and testing were unknown to the team, as the team only uses 2 IDEs for code development, and source control is only available for the development environment. Other developers have tried to use newer frameworks to improve the current issues, but the manager does not like it: ""what if you leave, then who is going to maintain that code? Let's leave it the way it is."" Some of those developers have already left and moved to other companies.

In summary, I am sure similar situations happen to many developers in other companies, but due to different circumstances a developer might prefer to stay with the team rather than go to another company, for reasons like job convenience, work flexibility, company benefits, or just because a better opportunity has not arrived. There is no perfect company that I know of, but how would you as a developer behave and approach all these issues in order to keep things positive and ultimately promote change for the improvement of the product and the betterment of the software development process (whether you have many years of development experience or just a few)? I know this post is long, but I preferred to give extra details to increase the chances of getting useful feedback.

Thanks a lot for all your feedback and time.

","73323","","55400","","2014-03-31 15:50:53","2014-04-05 11:25:07","What to do as a Dev when for years their team has lacked product innovation, not used project mgmt methodologies, and kept bad Software Dev practices?","","5","8","7","2014-04-05 11:25:11","","CC BY-SA 3.0" "199187","2","","199177","2013-05-23 17:57:55","","1","","

Well, there are multiple versions of Subversion itself, so technically you should just go look at the changelog for the releases.

I think you'll find the differences are related to the times when the svn repository format changed: svn itself will happily (and silently) upgrade your working copy, and you can easily upgrade your server repo if you start using a new version of svn; the dumpfilter that you get with the server would just have been modified to work with the new formats.

As for bugs: svndumpfilter is a tool with certain limited uses, and as a result it doesn't get modified much, since people don't tend to use it very often. It's also open source, so if someone really had a need to fix a particular bug in it, they would have (or would have paid someone else to do so). I don't know if that means ""lower priority"" or not; maybe ""less resourced"" would be a better term.

","22685","","","","","2013-05-23 17:57:55","","","","1","","","","CC BY-SA 3.0" "307439","2","","307177","2016-01-15 00:57:19","","10","","

Kind of echoing Basile here but with a slightly more negative tone. I'd say ""yes"" too, but be careful with that if you are allowing silent violations of preconditions. As a personal example, I worked in a C codebase that did this kind of stuff through an API:

int f(struct Foo* foo)
{
    if (!foo)
        return error;
    ...
    return success;
}

Perhaps even with a debug log in the middle. Yet it was a massive codebase, tens of millions of lines of code, over 30 years of development, and with pretty shoddy standards. The logs were filled to the brim with such warnings, and developers had developed the habit of ignoring the output.

I found, most unfortunately, when trying to change such error checks into obnoxiously loud assertions, that there were thousands of places in the system violating these preconditions and simply ignoring the errors (if any were even reported; some functions just returned without doing anything in those cases). The developers just worked around it and kept trying stuff and adding more code until their code ""worked"", blissfully unaware of their original mistake: they were passing nulls to a function that did not allow it.

As a result, I strongly recommend doing like this:

void MyClass::setMyString(QString input)
{
    assert(!input.isNull() && ""Error: setMyString() received NULL argument."");
    m_MyString = input;
    emit myStringChanged();
}

This will bring your application to a grinding halt if the assertion fails, showing you the precise source file and line number in which the assertion failed along with this message.

This is for cases where your function is not allowed to receive null (in your case, ""null"" here means ""empty string""). For ones that are allowed, you might simply do something else, but it wouldn't be reporting a bug (for that, use the loud and obnoxious assert which cannot be ignored).

An example of where you might use if instead might look like this:

bool do_something(int* out_count = 0)
{
    ...
    if (out_count)
       *out_count = ...; // write the result back.
    return true;
}

For ones that are not allowed to receive a null, use references when possible. However, you might still encounter ""nulls"" if you accept smart pointers, for example, or if you expand your definition of ""null"" to include empty strings. In those cases, assert your preconditions liberally if you can. assert has the advantage of not only bringing your application to a grinding halt (a good thing), but also pointing out where it was brought to a halt even outside of a debugging session (though still with a debug build, for example), and it's also an effective means of documentation.

qDebug is usually a little too silent. A team with good habits that never allows those outputs to be ignored might get away with keeping them at zero. But an assert will protect you even against the sloppiest teams by bringing the entire process to a grinding halt, making these programmer errors impossible to ignore. Another minor benefit of assert is that it won't slow down your release (production) builds, while if (...) qDebug(...) will.
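For example (a trivial sketch, not from any particular codebase): the standard assert is compiled away entirely when NDEBUG is defined, which is the usual setting for release builds:

#include <cassert>

void set_name(const char* name)
{
    // Checked in debug builds; compiled away entirely when NDEBUG is
    // defined (the usual setting for release builds).
    assert(name != nullptr);

    // ... use name ...
}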

For errors caused by external sources outside of your control (e.g., failing to connect to a server that is down, failing to allocate a huge block of memory, trying to read a corrupt file the user loaded, etc.), throw exceptions. Don't do this for programmer bugs like accessing an array out of bounds or receiving a null in a place that should never receive null (unless you're working on very mission-critical software where the software attempts to gracefully recover even from programmer bugs, as opposed to making them as easy to detect and reproduce as possible). For those cases, assert.

","","user204677","47","","2017-12-20 16:15:55","2017-12-20 16:15:55","","","","4","","","","CC BY-SA 3.0" "307478","1","","","2016-01-15 10:40:43","","1","50","

In our application users can enter custom expressions to calculate certain things. For instance they can specify an invoice and define a number of lines for cost calculation.

An example for a course with a $400 price, a $10 transaction fee, and a number of free people (say, because of some credit):

  • (?numberOfParticipants? - ?creditedtTickets?) * 400 + 10

We currently have custom code that parses and executes this. But we recently found a bug in it and need to spend some time on this.

We could improve the current hacky code, or we could build a proper parser, but I feel this is a very generic problem that lots of people have, and there should be an off-the-shelf solution; I just can't seem to find one.

Does anybody recognize this problem and how did you handle it?

","210938","","","","","2016-01-15 12:56:53","How to handle user created expressions in application","","1","4","","","","CC BY-SA 3.0" "199431","2","","199400","2013-05-25 14:48:14","","5","","

No, no, no. Absolutely not the sign of a pro-developer. A pro-developer is often the most vocal about which languages to use.

A Pro-Developer Has Failed Before

True knowledge of the strengths and weaknesses of a programming language comes from failure, and not text books or user manuals on how to write in that language.

I've worked on a wide range of projects that have failed. People were fired and businesses lost money. The programming language used played a role in what happened, and I've learned a lot about the risks involved when using that language.

Success Alone Is Not Enough

A developer can be certified by a credible organization for a programming language. He/she may read books extensively on a nightly basis. They may have several open-source projects they publish in that language.

Being successful in a programming language doesn't give you the wisdom to avoid problems. It makes you overconfident. I've worked on many projects where the lead developer was overly successful; as a result, they wouldn't listen when I tried to voice caution that the team was headed for problems.

What's The Sign Of A Pro-Developer?

Fear when presented with the opportunity to join a project that is using the wrong tool for the job. The developer who shows signs of ignorance or calm is an inexperienced fool.

Reverse Evaluation: I assume we're talking in terms of a job interview: how to tell if a developer knows his stuff. A pro-developer will turn the tables in an interview when he detects a mismatch between the problems at hand and the programming language chosen to solve them. He'll want to know why the interviewer represents a company that has made what he thinks is an obvious mistake, and he knows changing programming languages is like trying to redirect a huge asteroid headed for New York.

Why Does This Happen So Often?

It's a multi-phase process. Here's the short form.

  1. A business starts a new project with inexperienced people because they're cheaper.
  2. Inexperienced people pick the programming language they are most comfortable with. Not what is right for the project (often this is done by people who won't be writing the code).
  3. 90% of those projects fail, but 10% survive.
  4. Those 10% suddenly need to fix up the project because now it's selling.
  5. Now there is a crisis, so they decide to invest in an experienced developer.
  6. The experienced developer comes in and demands the programming language be changed (all the other pro-developers interviewed wouldn't take the job).

He is now thought to be worshipping a particular language and a holy war starts.

This scenario plays out over and over again in industry, and it's because of this that so many developers debate the pros and cons of each language. None of us wants to be the one who has to fight to change languages. So these debates can become very heated at the start of a project.

No Turning Back Once You Start

Picking a programming language is like taking the first stab with the shovel into the dirt. You are either planting a tree, or digging your own grave.

","52871","","52871","","2013-05-25 16:35:19","2013-05-25 16:35:19","","","","2","","","","CC BY-SA 3.0" "408387","2","","408385","2020-04-04 18:11:18","","1","","

The purpose of aggregates is to model complex business relationships and to take business actions. The purpose of your repository is to perform ordinary CRUD operations on a data store.

Let's walk through a simple example: an Invoice. Invoices are not simple CRUD entities; they are aggregates of several entities: Customer, Products, Addresses, Payments. You operate on these entities individually using CRUD, but you operate on an invoice using methods that apply to an invoice.

Some potential methods for an invoice:

  • Print
  • Get Balance
  • Add Products
  • Change a Quantity

So your Invoice object becomes a mapping between methods that apply to an Invoice, and CRUD operations that apply to the respective repositories. It adds additional value beyond merely separating your persistence mechanism from the entities it persists.
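
As a rough sketch of that idea (all the names below are invented for illustration, not a prescribed design):

#include <string>
#include <utility>
#include <vector>

struct InvoiceLine { std::string product; int quantity; double unitPrice; };

// The aggregate: its methods express business actions on the invoice as a whole.
class Invoice {
public:
    void addProduct(std::string product, int quantity, double unitPrice) {
        lines_.push_back({std::move(product), quantity, unitPrice});
    }
    void changeQuantity(const std::string& product, int quantity) {
        for (auto& line : lines_)
            if (line.product == product) line.quantity = quantity;
    }
    void recordPayment(double amount) { paid_ += amount; }
    double balance() const {
        double total = 0.0;
        for (const auto& line : lines_) total += line.quantity * line.unitPrice;
        return total - paid_;
    }
private:
    std::vector<InvoiceLine> lines_;
    double paid_ = 0.0;
};

// The repository stays a plain CRUD boundary that loads and saves the aggregate;
// the business behaviour above maps onto these operations.
struct InvoiceRepository {
    virtual Invoice load(int invoiceId) = 0;
    virtual void save(int invoiceId, const Invoice& invoice) = 0;
    virtual ~InvoiceRepository() = default;
};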

I think when people start working through architectures, they forget to think about the reasons they're creating an architecture in the first place. You don't create an architecture to conform to a set of architectural rules; you create an architecture to make your software maintainable. The architecture is there to serve you, not the other way around.

Further Reading
Clean Architecture - Too many Use Case Classes

","1204","","1204","","2020-04-04 18:37:24","2020-04-04 18:37:24","","","","4","","","","CC BY-SA 4.0" "199786","2","","199768","2013-05-29 13:15:31","","3","","

I'm not sure, but I think you may be confusing story roles with authorization roles. The user story should not tell you how to implement something. The following user story is an example of user voice form using a role:

As a manager, I need to be able to add a new Foo Report to my Bar so that I can analyze all Foos and look for issues.

In my experience, the intention here is not to prescribe how to implement authorization, but instead to provide the developer with an idea of the type of person that is using this feature. You may also use a persona, such as ""Bob"", instead of the role to capture demographics and information about the type of user using the Foo Report. This gives you context for how the feature will be used.

This should not mean that an authorization role of ""Manager"" should be created. You can implement your security however you like, so long as a user that is like ""Bob the manager"" can be given access to report on the Foos in their Bar.

Since my teams work with a lot of security roles on their projects, we use personas instead to avoid the possibility of confusion within the team.

(BTW: My apologies if this is not what you meant, perhaps you are using user roles in a way that I am not familiar with)

","75063","","","","","2013-05-29 13:15:31","","","","5","","","","CC BY-SA 3.0" "308034","2","","308027","2016-01-22 07:38:13","","5","","

Neither Format Is REST Standard.

It is your application's decision whether to use a hierarchical representation or a flat representation, or to offer both, just as it is your decision whether to use JSON or XML or Protocol Buffers or HTML, or all of them, for example.

Neither Representation Is Best Practice.

It depends on your applications' needs or your clients' needs.

There really is no one size fits all best practice for this. You will find a range of opinions. There are potentially great reasons to do it either way, depending on your requirements, tools, team proficiencies, etc.

You really just have to consider the pros and cons in your situation, and don't worry about it too much. Either choice will work with most tools.

REST Questions You Should Ask

  1. What are these resources? What will the resource identifiers (URIs) look like? A simple, efficient, appropriately cacheable URI model is at the heart, or foundation, of a good REST service. How will a client specify which media type or representation it wants?
  2. Will you need to retrieve more detail on the individual states (of India) and capitals? Is this a collection and are those states individual resources?
  3. If the application will need to GET or PUT, say ""Karnataka"" how will it know what URI to use? That is not included in your representation.
  4. If this is not a read-only resource, how will you use the HTTP methods (GET, PUT, POST, etc.) to act on it? Using those methods properly is at the heart of REST over HTTP.
  5. After that, look up HATEOAS. Then have another coffee. Read it again, then post questions here if needed. Seriously HATEOAS seems to throw people for a loop.
  6. Now you are ready for the versioning wars. How will you evolve your API? Will you version it? (The REST thought police will get me for even asking that question.)

There's No Law That Says Good APIs Have To Use The REST Style
Fielding is happy to remind us of this, if we need a respected voice.

Really doing REST is a commitment to some intellectual effort, and some discipline in designing, testing and evolving your API. It's not needed for every API.

Sometimes you just have to build stuff that works and ship it. Refactor on the next iteration when you know more.

","54328","","","","","2016-01-22 07:38:13","","","","0","","","","CC BY-SA 3.0" "200328","2","","200320","2013-06-03 23:02:38","","12","","

This aligns with my saying: ""Programming is like sex. You can do it alone, but it is way less fun that way. And it drives you nuts if you do it that way for too long.""

Yes, it's convenient to be pretty much your own boss and the lone master of a department. It's also scary to leave the established shell, not to mention facing the hostile world outside and starting from the bottom again. Parting is even harder if you are not kicked out, and/or you feel bad about abandoning a company that needs you and in which you have pretty much become locked in.

I've been there. I worked some 12 years at a company as a one-man army. In the last years it was what many would call Canaan: I worked mostly from home, just getting a note saying ""X client wants something, check it out"", then a few days later mailing back that the contract could be signed for X amount and Y deadline, then a month later sending another mail that the invoice could be sent. I worked maybe 1 hour a day on average, for full-time money, and everyone was content, boss and clients alike.

But it started to get to me, and despite having all that time, it was mostly just wasted.

I eventually issued an ultimatum: rearrange the work so I could work in a team, or I was out. The boss probably thought it was a bluff. Bottom line, I left for good. I thought I would have a job the next day. Yeah, sure. ;-)

I faced a series of uber-WTF interviews and companies, but after a few months I got a job at a company that turned out to be a big-time sucker, though the local teams really rocked (at least when I joined, a year after a massive exodus had started, obviously taking the best people first). I got about the same money, but for 8+ hours of work in the office plus the commotion, on a project that had a ton of serious problems, with remote bosses guarding all the bugs.

But overall, I felt alive again, and happy to do relevant work in a team that struggled for the same, and was happy that we finally started making progress against all the wind and hostile weather. By my count the switch was well worth it. The only thing to be sorry about is that I didn't leave 4-5 years earlier.

The follow-up is not really relevant. (I did eventually leave, this time only a year later than optimal, did a home project, then joined another company that was promising; while our team made incredible progress, the company turned south, and this time I finally left exactly at the zenith, and after a calculated summer vacation landed where I work now, with no plans to leave.) The point is that life works out, never the way you expect, but for the better in the long run.

The bottom line is: if you no longer see the sun, you had better let go of the false hopes. It just will not get better. You can either force your way through or look for actually fertile ground.

","92505","","","","","2013-06-03 23:02:38","","","","1","","","","CC BY-SA 3.0" "200405","2","","200329","2013-06-04 12:35:25","","5","","

My own personal preference for things that do back-end work is to find the end-user change. If the data you are processing eventually winds up in a report, show the before/after differences in the report.

I'm assuming the desire for the change came from a need. What was the problem that triggered the need to do the story? Your user story 'voice form' should indicate to you how you will be able to demo the problem by acting as the user in your story (i.e. As Joanne I need to view the report without users that are in Europe).

Additionally, you can look to your test team to help you in this case. There must have been some way that the test team was able to verify that the story was Done. How did they do this? Are you able to show that process within the demo?

","75063","","","","","2013-06-04 12:35:25","","","","0","","","","CC BY-SA 3.0" "309216","1","","","2016-02-04 02:43:09","","2","148","

We are a small team working on a LoB system that needs to connect to varied systems such as ERPs and CRMs to extract business data like invoices, customer info, production orders and such, for internal use by the application, and then return some specific result to the ERP. This is a real-time operation, not a one-off data extraction. All the business logic is properly shielded from the data layer.

The thing is, the data we extract from those systems is always the same, but the systems change a lot depending on the customer, and so do the sources/tables/fields from which the data is extracted. We have seen dozens of ERPs to date, from SAP/Dynamics to a lot of small ones, and every time we do a new installation there is code to be written so that our system knows where and how to extract the data it requires. As I said, most are small or in-house ERP systems, so it ends up being a one-off library that we can't reuse for the next customer.

We want to improve this, ideally to a setup where, instead of data-access code, there is configuration in which the ERP structures are mapped to our structures for each installation; but just thinking about it seems like a lot of work.

We thought of ETL, but it isn't real-time. REST is also not an option since it still would require programming for each installation.

Is there an existing universal database/datasource mapper/translator to solve this kind of problem? Or a standard pattern to use for developing this kind of thing?

","214035","","1130","","2016-03-10 20:25:34","2016-03-11 19:59:48","Standard System that connects to any datasource without much/any source-specific code?","","1","5","","","","CC BY-SA 3.0" "411608","2","","411585","2020-06-17 09:49:31","","13","","

Siloed

Framework and infrastructure code is tricky. It is the dark and messy part of the code base that hits actual walls, and the worst part is that the solutions are often counter-intuitive, as they have to work around the user (a.k.a. the programmer), language decisions, and platform idiosyncrasies.

What has happened is that you've become an expert, and become effectively siloed.

The worst part is that this kind of code does not have an effective boundary between your code and the users' code.

There are a few ways to deal with this situation.

Share the Pain

Nothing breeds knowledge quite like having to shovel the S#*T yourself.

Not everyone on the team will have the head for infrastructure/framework work, but there will be a few. The best way to start distributing knowledge is to start getting these developers to work on small areas of the infrastructure/framework.

Of course maintain oversight (it is critical after all), but you need to start getting the other developers thinking across the silo border.

Enforce the Border

If, for one reason or another, tearing down the silo cannot be done, the other strategy is to enforce better boundaries between your code and their code.

This can be done in a number of ways.

  • Push code into libraries, and while you do it, stabilise and document the interface.
  • Institute idioms. Idioms are easily remembered and can vastly simplify a complex implementation into a simple story from the user's perspective.
  • Documentation. If you're going to explain it more than once, it's best to capture it for future reference. Better yet if you can add multimedia: a recording of your voice, a presentation, graphs, pictures, or a link to the source itself somehow.
  • Push back on explaining. It's not your job to describe the system's nuances to everyone who asks. If it were, you would be a technical writer, not a software developer. Knowledge that was not hard won does not stay learned. Don't waste your efforts.
  • Push implementation out of the framework up into user space. If they are seeking you out a lot to understand it, they are trying to alter it; ergo it is in flux and at the wrong shearing layer in the tech stack. Provide a plugin interface for it (such as a strategy pattern) and a default implementation if others need it in the framework.
","319783","","319783","","2020-06-17 09:59:42","2020-06-17 09:59:42","","","","4","","","","CC BY-SA 4.0" "411873","1","","","2020-06-23 09:59:09","","1","97","

These are quite specific circumstances I've come across, and I somewhat struggle to find the proper way to approach this.

I'm given a class written in Swift, which has a control property like this (I generalized some names to avoid referencing the actual product):

class FooViewController: UIViewController {
    ...
    var controlView: FooControlViewProtocol?
    ...     
}

FooControlViewProtocol is a quite simple protocol which has both directive methods and event-based methods; it is a kind of communication interface between a UIViewController instance and a so-called controlView, which is essentially a UIView instance conforming to the protocol.

protocol FooControlViewProtocol {
    func doFirstMethod();
    func doSecondMethod();

    func eventDidStart(_ event: Event);
    func eventDidFinish(_ event: Event);
}

And now here is the complicated part: these parts are rather concrete, and they come from a framework another team is working on, and we should not change that at all costs, because other parts of the app (and even other projects) also rely on it. However, we need to customize it so it meets the needs of our project. Mostly we didn't even have to bother: that pair covered most of the requirements we had. However, at some point we had to implement a custom scrolling behavior and inform this control view of the current scrolling position, but for only one of dozens of ViewControllers in the app. I ended up extending the protocol like this:

protocol ScrollableFooControlViewProtocol: FooControlViewProtocol {
    func scrollDidEnd(_ offset: CGFloat)
}

And then, in a FooViewController subclass the property is set like this:

class ScrollableFooViewController: FooViewController {

    var scrollableControlView: ScrollableFooControlViewProtocol?
    
    override var controlView: FooControlViewProtocol? {
        set {
            scrollableControlView = newValue as? ScrollableFooControlViewProtocol
        }
        get {
            scrollableControlView
        }
    }
}

The apparent problem is that if a new controlView instance does not conform to ScrollableFooControlViewProtocol, the application just ignores it silently and keeps working, AND I find this mistake extremely likely to happen, because the existing code uses the given abstractions heavily (FooViewController and FooControlViewProtocol). I cannot throw an exception from the property, and it's somewhat too complicated to switch the property to a method (because of the abstraction). One option I've considered is force-unwrapping the newValue cast, like this:

set {
    if let newValue = newValue {
        scrollableControlView = newValue as! ScrollableFooControlViewProtocol
    } else {
        scrollableControlView = nil
    }
}

And the application can just crash in case of a mistake, revealing the problem immediately. However, this approach is far from perfect: the feature this protocol adds is not essential to the application lifecycle, and the app can work seemingly flawlessly without it. So making it crash because of this is kind of an overreaction.

An integration test might be an answer here, where I can check the interaction of these two components; however, I doubt this will pay off for such a tiny case.

So the question is: how do I make this code safe? Probably I should have implemented it differently in the first place.

","276742","","276742","","2020-06-23 13:01:35","2020-06-23 23:24:07","What is better way to track mistakes in error-prone part of a feature?","","1","8","","","","CC BY-SA 4.0" "412281","2","","412164","2020-07-02 13:49:17","","1","","

I think this subject suffers from conflated and co-opted terminology, which causes people to talk past each other. (I've written about this before).

For example, take the following:

Should I be writing only integration tests when there is dependency, and unit tests for pieces of code without any dependency?

I think most people would answer this question by saying that (ideally, modulo common sense, etc.):

"When there is no dependency, unit tests are sufficient and mocks aren't needed; when there is dependency, unit tests may need mocks and there should also be integration tests."

Let's call this answer A, and I'm going to assume that it's a relatively uncontroversial thing to say.

However, two people might both give answer A, but mean very different things when they say it!

When a "classicist" says answer A, they might mean the following (answer B):

"Functionality that is internal to the application (e.g. a calculation which performs no I/O) doesn't need integration tests, and its unit tests don't need mocks. Functionality with some external dependency (e.g. a separate application like an RDBMS, or a third-party Web service) should have integration tests, and if it has unit tests they may need the external interactions to be mocked."

When others ("mockists"?) say answer A, the might mean the following (answer C):

"A class which doesn't call methods of another class doesn't need integration tests, and its unit tests don't need mocks. Classes which call methods of other classes should mock those out during their unit tests, and they should probably have integration tests too."

These testing strategies are objectively very different, but they both correspond to answer A. This is due to the different meanings they are using for words. We can caricature someone who says answer A, but means answer B, as saying the following:

  • A "dependency" is a different application, Web service, etc. Possibly maintained by a third-party. Unchangeable, at least within the scope of our project. For example, our application might have MySQL as a dependency.
  • A "unit" is a piece of functionality which makes some sort of sense on its own. For example "adding a contact" may be a unit of functionality.
  • A "unit test" checks some aspect of a unit of functionality. For example, "if we add a contact with email address X, looking up that contact's email address should give back X".
  • An "interface" is the protocol our application should follow to interact with a dependency, or how our application should behave when used as a dependency by something else. For example, SQL with a certain schema when talking to a database; JSON with a certain schema, sent over HTTP, when talking to a ReST API.
  • An "integration test" checks that the interface our application is using with a dependency will actually have the desired effect. For example "There will always be exactly one matching row after running an UPSERT query".
  • A "mock" is a simplified, in-memory alternative to a dependency. For example, MockRedisConnection may follow the same interface as RedisConnection, but just contains a HashMap. Mocks can sometimes be useful, e.g. if some of our unit tests are annoyingly slow, or if our monthly bill from a third-party Web service is too high due to all of the calls made by our tests.

We can caricature someone who says answer A, but means answer C, as saying the following:

  • A "dependency" is a different class to the one we're looking at. For example, if we're looking at the "Invoice" class, then the "Product" class might be a dependency.
  • A "unit" is a chunk of code, usually a method or class. For example "User::addContact" may be a unit.
  • A "unit test" checks only the code inside a single unit (e.g. one class). For example "Calling User::addContact with a contact with email address X will ask to DBConnection to insert a contacts row containing email address X".
  • An "interface" is like a class but only has the method names and types; the implementations are provided by each class extending that interface.
  • An "integration test" checks that code involving multiple classes gives the correct result. For example "Adding Discounts to a ShoppingCart affects the Invoice produced by the Checkout".
  • A "mock" is an object which records the method calls made on it, so we can check what the unit of code we're testing tried to do in a unit test. They are essential if we want to isolate the unit under test from every other class.

These are very different meanings, but the relationships between B's meanings and between C's meanings are similar, which is why both groups of people seem to agree with each other about answer A (e.g. their definitions of "dependency" and "integration test" differ, but both have the relationship "dependencies should have integration tests").

For the record, I would personally count myself as what you call a "classicist" (although I've not come across that term before); hence why the above caricatures are clearly biased!

In any case, I think this problem of conflated meanings needs to be addressed before we can have constructive debates about the merits of one approach versus another. Unfortunately every time someone tries to introduce some new, more specialised vocabulary to avoid the existing conflations, those terms start getting mis-used until they're just as conflated as before.

For example, "Thought Leader X" might want to talk about physical humans clicking on a UI or typing in a CLI, so they say "it's important to describe how users can interact with the system; we'll call these 'behaviours'". Their terminology spreads around, and soon enough "Though Leader Y" (either through misunderstanding, or thinking they're improving the situation), will say something like "I agree with X, that when we design a system like the WidgetFactory class, we should use behaviours to describe how it interacts with its users, like the ValidationFactory class". This co-opted usage spreads around, obscuring the original meaning. Those reading old books and blog posts from X may get confused about the original message, and start applying their advice to the newer meanings (after all, this is a highly regarded book by that influential luminary X!).

We've now reached the situation where "module" means class, "entity" means class, "unit" means class, "collaborator" means class, "dependency" means class, "user" means class, "consumer" means class, "client" means class, "system under test" means class, "service" means class. Where "boundary" means "class boundary", "external" means "class boundary", "interface" means "class boundary", "protocol" means "class boundary". Where "behaviour" means "method call", where "functionality" means "method call", where "message send" means "method call".


Hopefully that gives some context to the following answer, for your specific question:

However, how would I go about writing unit tests for a piece of code that uses one or more dependencies? For instance, if I am testing a UserService class that needs UserRepository (talks to the database) and UserValidator (validates the user), then the only way would be... to stub them?

Otherwise, if I use a real UserRepository and UserValidator, wouldn't that be an integration test and also defeat the purpose of testing only the behavior of UserService?

A 'classicist' like me would say that UserService, UserRepository and UserValidator are not dependencies, they're part of your project. The database is a dependency.
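
To make that concrete, here is a hypothetical sketch (all names invented): UserService and UserValidator are exercised together as real objects, and only the database-backed repository is replaced by an in-memory stand-in. (Whether to fake the real database at all is itself a tradeoff, as discussed below.)

#include <cassert>
#include <map>
#include <string>

// Stand-in for the DB-backed repository (the database is the dependency).
struct InMemoryUserRepository {
    std::map<std::string, std::string> rows;
    void save(const std::string& name, const std::string& email) { rows[name] = email; }
    std::string emailOf(const std::string& name) const { return rows.at(name); }
};

// Real validator, not a mock.
struct UserValidator {
    bool validEmail(const std::string& email) const {
        return email.find('@') != std::string::npos;
    }
};

struct UserService {
    InMemoryUserRepository& repository;
    UserValidator& validator;
    bool addUser(const std::string& name, const std::string& email) {
        if (!validator.validEmail(email)) return false;
        repository.save(name, email);
        return true;
    }
};

int main() {
    InMemoryUserRepository repository;
    UserValidator validator;
    UserService service{repository, validator};

    // The test checks observable behaviour of the application code,
    // not which methods UserService happened to call on its collaborators.
    assert(service.addUser("alice", "alice@example.com"));
    assert(repository.emailOf("alice") == "alice@example.com");
    assert(!service.addUser("bob", "not-an-email"));
}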

Your unit tests should check the functionality of your application/library, whatever that entails. Anything else would mean your test suite is lying to you; for example, mocking out calls to the DB could make your test suite lie about the application working, when in fact there happens to be a DB outage right now.

Some lies are more acceptable than others (e.g. mocking the business logic is worse than mocking the DB).

Some lies are more beneficial than others (e.g. mocking the DB means we don't need to clean up test data).

Some lies require more effort to pull-off than others (e.g. using a library to mock a config file is easier than manually creating bespoke mocks for a whole bunch of intricately-related classes).

There is no universal right answer here; these are tradeoffs that depend on the application. For example, if your tests are running on a machine that may not have a DB or a reliable network connection (e.g. a developer's laptop), and where left over cruft will accumulate, and where there's an off-the-shelf library that makes DB mocking easy, then maybe it's a good idea to mock the DB calls. On the other hand, if the tests are running in some provisioned environment (e.g. a container, or cloud service, etc.) which gets immediately discarded, and which it's trivial to add a DB to, then maybe it's better to just set 'DB=true' in the provisioner and not do any mocking.

The point of integration tests, to a classicist, is to perform experiments that test the theories we've used to write our application. For example, we might assume that "if I say X to the DB, the result will be Y", and our application relies on this assumption in the way it uses the DB:

  • If our tests are run with a real DB, this assumption will be tested implicitly: if our test suite passes, then our assumption is either correct or irrelevant. If our assumption is wrong in a relevant way, then our tests will fail. There is no need to check this with separate integration tests (although we might want to do it anyway).

  • If we're mocking things in our tests, then our assumptions will always be true for those mocks, since they're created according to our assumptions (that's how we think DBs work!). In this case, if the unit tests pass it doesn't tell us if our assumptions are correct (only that they're self-consistent). We do need separate integration tests in this case, to check whether the real DB actually works in the way we think it does.

","112115","","112115","","2020-07-07 06:12:23","2020-07-07 06:12:23","","","","0","","","","CC BY-SA 4.0" "203996","2","","203993","2013-07-06 09:45:41","","5","","

how much can the Product Owner really shake up the Product Backlog?

As much as he likes.
It is the ""Product Owners"" job to represent the user and get them what they want.

Is it really fair game to ignore what's already in the backlog and keep on shoving new stories at the top?

Sure is.
If the old stories are no longer relevant to the user, why build them? You only want to build what is important to the user. But be careful: a story is supposed to be a user action. You should be able to describe a story in terms of an action a user performs (no technical details).

I've tried to argue that this is hogwash

It is the job of the ""Scrum Master"" to protect the developers from flippant change.
It is the job of the ""Scrum Master"" to pick what makes it into the sprint and protect the developers from the ""Product Owner"" and make sure the technical part of the project work. This means he should be picking items to form the architecture first. He should try and deliver some user stories but the ones that make sense (trying to respect the Product Owners priority but he does not need to stick to it 100% if that does not make technological sense).

Note: As pointed out below, my definition of ""Scrum Master"" here may not be fully standard. I was simply using it to point out that it is not the job of the PO to pick tasks for the sprint backlog, as this is technical in nature (and the PO, being a chicken, has no voice in sprint planning). It is the job of the team to pick what goes into the sprint backlog, as they are technically inclined and understand how components will interact. They will be heavily influenced by the priority in the product backlog, but that is not the only consideration they use when making technical decisions. My use of the term ""Scrum Master"" is heavily influenced by my experience, where the ""Scrum Master"" is usually the technical or team lead (usually a senior dev with the clout/fortitude to stand up to management, who tend to be pushy when they can get away with it); this of course is not required by Scrum.

-

but I often hear counter-arguments such as ""adapting to frequently changing business needs,"" ""we're moving so fast - we're super agile"" and I don't know what to say to those arguments,

These are good arguments. But make sure

  1. You are talking about true stories
  2. Your architecture takes priority over user interface

although I feel it is not how Scrum was meant to be.

Scrum is meant to allow you to react quickly.
But during a sprint there should be no change to the sprint backlog. And make sure your sprints are of reasonable length: 3 to 4 weeks. That should give you a chance to finish a reasonably small story or build the foundation for a large story.

","12917","Loki Astari","12917","","2013-07-07 22:14:50","2013-07-07 22:14:50","","","","38","","","","CC BY-SA 3.0" "204009","2","","203993","2013-07-07 22:46:50","","6","","

In traditional Scrum, the Product Owner is the person who owns the Product Backlog. This person is responsible for adding or removing stories as appropriate and ensuring that the list is prioritized based on the needs of the stakeholders. However, the Development Team is responsible for ensuring that the Product Backlog Items that are added are understood and good enough (complete, clear, testable, feasible, etc) to be estimated and delivered when they come up in a future sprint.

From your perspective on the development team, you shouldn't be relying on the product backlog. All of your actions - your requirements elaboration, architecture and design, implementation, and testing should be fully based on the Sprint Backlog. This is what gets fixed for a particular Sprint and changes to the Sprint Backlog should be minimized.

Remember that the Product Owner is the voice of the stakeholder for the development team. They are the ones who understand what the business need for the system is and what capabilities or features would have the most impact.

Warning signs to look out for would be the Product Owner attempting to change the Sprint Backlog or having a Product Owner that is out of touch with the needs of the business or user base.

","4","","4","","2017-11-16 11:43:19","2017-11-16 11:43:19","","","","2","","","","CC BY-SA 3.0" "412700","2","","412698","2020-07-13 23:02:36","","4","","

There are different things at play here.

First, Scrum is not the same as SAFe, and neither is the same as Agile. Agile Software Development is a set of values and principles. Scrum is a lightweight process framework that is defined in the Scrum Guide and contains a number of roles, events, and artifacts. Scaled Agile Framework (SAFe) is an enterprise-level framework that may or may not be helpful in helping an enterprise embrace agility.

What this means is that a practice that helps promote agility may or may not be in line with the rules of Scrum or SAFe.

Is sending a status email at the end of the day agile? I'm not sure. Maybe, maybe not. One of the principles of Agile Software Development is that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation". If there's no face-to-face (or, in today's world, 20 years after the Manifesto for Agile Software Development was written, high fidelity voice and video communication), I'm not sure you can be Agile. However, there's insufficient information to outright say that you aren't Agile.

Is sending a status email at the end of the day consistent with Scrum? Absolutely not. Scrum is defined in the Scrum Guide and is immutable. That means if you are not following the rules that are specified, you may be doing something that works for you, but the result is not Scrum. One of the key Scrum events is a daily planning and coordination meeting called the Daily Scrum. If the Development Team does not get together for up to 15 minutes for the purposes of planning their day, it's not Scrum.

Is sending a status email once a day consistent with SAFe? Again, I'd say no. At the team level, SAFe calls for the Daily Stand-Up (DSU). It's a gathering of the full team at the same time and the same place every day. Since you're not doing that, I'd be hesitant to call what you're doing SAFe.

If your team is so distributed, I'd question how effective they are as a team. They may be more like individuals working on a common project. In such a case, perhaps Scrum and SAFe aren't appropriate for your needs. Most frameworks are built around either co-located teams or at least teams that have not-insignificant overlap in their working hours to support frequent, real-time communication.

","4","","","","","2020-07-13 23:02:36","","","","0","","","","CC BY-SA 4.0" "204196","2","","204178","2013-07-09 18:15:23","","1","","

company owner: ""all the devs here know how to do TDD, they'll do it when required"".

It is pretty obvious that the company owner is running out of arguments and is trying to end the discussion.

But honestly, as long as the company owner is not also the lead developer, you are talking to the wrong person. TDD is something you have to convince your dev colleagues of, and if they are willing to adopt it, then the company owner most probably won't resist. If your boss is a good one, he will listen to what the whole team says, and you have just one voice among many. Accept that if you want to be a good team player.

","9113","","9113","","2013-07-11 16:02:49","2013-07-11 16:02:49","","","","0","","","","CC BY-SA 3.0" "414681","2","","414672","2020-08-10 14:03:35","","3","","

This is an area where you don't have canned patterns. So let's look at what your stated needs are:

  • You need to know if an insert was successful
  • The update could come from any of 3 different sources

Ideally, we would need a means of generating a unique key that is derived from the data you are receiving in some way.

If we only had one source of information, we would be able to use the message Id to identify if the record was inserted or not. Another option would be to codify the source and the message id together. Example: source is codified as 1,2 or 3, so you append the message Id to the 1, 2, or 3 prefix. It can work, assuming every message Id is unique. That may or may not be true.
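
A minimal sketch of that composite-key idea (the names sourceCode and messageId are mine, purely for illustration):

class MessageKeys {
    // Combine the codified source (1, 2 or 3) with the message id,
    // e.g. source 2 + message id "8f3c9a" -> "2-8f3c9a".
    static String dedupKey(int sourceCode, String messageId) {
        return sourceCode + "-" + messageId;
    }
}

If the column storing this key has a UNIQUE constraint, a second insert of the same message is rejected by the database instead of silently creating a duplicate.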

Another option is to have a creation date, trace ID and trace source in the table you are writing to. This allows you to query before writing. In this case I would have a transaction (sketched in code after the list below):

  • Query to see if there is a record written since the message was authored that came from the same source and has the same message id.
    • WHERE creationDate > ? AND messageSource = ? AND messageId = ? where the ? marks parameters for the query.
  • If nothing is found, write the update (including the source and trace id)--otherwise it has already been written
  • Complete the transaction
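
Here is a rough sketch of that check-then-write transaction in plain JDBC; the table and column names (updates, creation_date, message_source, message_id, payload) are invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;

public class IdempotentWriter {

    // Returns true if the row was written, false if it had already been written.
    public boolean writeOnce(Connection conn, int messageSource, String messageId,
                             Instant authoredAt, String payload) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);                       // start the transaction
        try {
            // 1. Query: has a record from the same source with the same message id
            //    been written since the message was authored?
            try (PreparedStatement check = conn.prepareStatement(
                    "SELECT 1 FROM updates " +
                    "WHERE creation_date > ? AND message_source = ? AND message_id = ?")) {
                check.setTimestamp(1, Timestamp.from(authoredAt));
                check.setInt(2, messageSource);
                check.setString(3, messageId);
                try (ResultSet rs = check.executeQuery()) {
                    if (rs.next()) {
                        conn.rollback();                 // already written, nothing to do
                        return false;
                    }
                }
            }
            // 2. Nothing found: write the update, including the source and trace id.
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO updates (creation_date, message_source, message_id, payload) " +
                    "VALUES (?, ?, ?, ?)")) {
                insert.setTimestamp(1, Timestamp.from(Instant.now()));
                insert.setInt(2, messageSource);
                insert.setString(3, messageId);
                insert.setString(4, payload);
                insert.executeUpdate();
            }
            conn.commit();                               // 3. Complete the transaction
            return true;
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}

Note that under default isolation levels two concurrent writers can still both pass the check, so pairing this with a unique constraint on (message_source, message_id) - or the composite key above - is a sensible safety net.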

On the topic of connection drops

If you are having a connection dropped intermittently, but often enough that this is a real problem, then something is wrong. It could be that your configuration is set for tolerances that are unreasonable. It could also be that you need to change your approach. For example, a timeout would be a symptom where you need to step back and take stock of the larger picture.

  • Don't request a connection until you are ready to do something with the database
  • If it's going to be a while until you do the next thing, release the connection when you are done
  • Determine if the timeout is network related, record related, or due to some other resource contention

When you are getting timeouts due to a network something is very wrong. I was on a program where actions that were taking milliseconds suddenly started taking minutes. It turned out that the infrastructure team moved the DNS server in a way where our servers were not updated. In self defense we put entries in our HOSTS file so our servers could always find the other servers we deployed to, as well as fixing the IP address of the DNS server.

Sometimes it's not the network layer, and your database is suffering from severe record locking problems. This can happen if your database silently promotes record locking to page locking or, worse, table locking (here's looking at you, MS SQL Server). Your options here are to offload queries from your database or to ensure that queries are for snapshots of data (i.e. they do not have to wait for transactions to resolve). In this case, make use of Redis when reading individual records, and ElasticSearch (or equivalent) when performing complex queries. The idea is that the database serves as the gold master and everything else is a slave to that data. The more you can relieve contention on the database, the faster your system will feel.

Finally, there can be other types of resource contention. Examples include disk access during a security update, network bandwidth due to very chatty communications, etc.

It's always good to have a solution to ensure a write once semantic, but when you are constantly dealing with something that should not be a problem, sometimes you need to take a look at what's causing the issue. That's a pain, but the general process is the same:

  • Look for correlations (i.e. events happening at the same time)
  • Go through a process of elimination until you find the cause
","6509","","","","","2020-08-10 14:03:35","","","","3","","","","CC BY-SA 4.0" "206328","2","","206321","2013-07-28 18:45:29","","55","","
  1. You can quit. Not the most constructive thing to do, but sometimes it's the only option. If you do, don't sit around and moan about how you had to give it up, take that energy and put it straight into something else - 'move on' in other words.

  2. You can fork it. There's no reason why you have to work with anyone. Fork, improve the code and let the others continue to have a little ego-fest of their own. Your new project will simply compete with the old, and it's up to you whether you make a success of it, or the old one beats you in terms of users and features.

  3. You can engage with the rest of the development team on the project to voice your concerns. Don't make it personal, but make it clear that you're unhappy with code churn, or lack of established quality processes, or unhappy that the new decisions are just pushed out without agreement from everyone. You'll either be told that nothing's wrong enough to change, or you'll get a few others agreeing with you that the team needs to fix things up. That might end up with the disruptive guy losing his commit access. Maybe you'll all agree that some of the changes are not improvements and the project needs to be reverted. (This latter option is the most likely outcome, unless it turns into a massive argument of entrenched opinions.)

It can be difficult when someone comes along and changes the safe and comfy routines you've become used to, but it could be said that having someone come along and shake up the old, cozy practices is a good thing in itself.

","22685","","111239","","2014-01-04 07:01:33","2014-01-04 07:01:33","","","","5","","","","CC BY-SA 3.0" "314513","2","","314504","2016-04-01 21:17:19","","4","","

The wording in that contract is not unusual.

In practice it means that they are asking you to track how many licences you own, who each license is assigned to, and to validate that people are not ""sharing"" licenses or downloading ""cracked"" versions off the internet.

If the vendor wanted to do a license audit, it would likely involve first establishing from the paper trail how many licenses you actually own, and then asking you to identify which machines are associated with each license.

They would then check a sample of machines without licenses, and as long as no unlicensed installations were found, they would thank you for your time and say good bye.

If you are unable to identify how many licenses you own, or which machines are licensed, then they would likely investigate every machine, tell you how many licenses you are actually using, and send you a large invoice for breach of contract and for any additional licences you were found to be using.

This costs the vendor both money to pay for the audit, and also in customer good will if customers feel they are being unfairly targeted, so software companies generally only perform an audit if they have good reason to suspect widespread license abuse.

","8035","","","","","2016-04-01 21:17:19","","","","0","","","","CC BY-SA 3.0" "207853","2","","69519","2013-08-10 17:17:47","","5","","

Initial note: I'm taking issue with one assumption in the question, and draw my specific conclusions (at the end of this post) from that. Because this probably doesn't make for a complete, encompassing answer, I'm marking this as CW.

Employees.CreateNew().WithFirstName(""Peter"")…

Could easily be written with initializers like:

Employees.Add(new Employee() { FirstName = ""Peter"", … });

In my eyes, these two versions ought to mean and do different things.

  • Unlike the non-fluent version, the fluent version hides the fact that the new Employee is also Added to the collection Employees — it only suggests that a new object is Created.

  • The meaning of ….WithX(…) is ambiguous, especially for people coming from F#, which has a with keyword for object expressions: They might interpret obj.WithX(x) as a new object being derived from obj that is identical to obj except for its X property, whose value is x. On the other hand, with the second version, it's clear that no derived objects are created, and that all properties are specified for the original object.

….WithManager().With…
  • This ….With… has yet another meaning: switching the ""focus"" of the property initialization to a different object. The fact that your fluent API has two different meanings for With is making it difficult to correctly interpret what is happening… which is perhaps why you used indentation in your example to demonstrate the intended meaning of that code. It would be clearer like this:

    (employee).WithManager(Managers.CreateNew().WithFirstName(""Bill"").…)
    //                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    //                     value of the `Manager` property appears inside the parentheses,
    //                     like with `WithName`, where the `Name` is also inside the parentheses 
    

Conclusions: ""Hiding"" a simple-enough language feature, new T { X = x }, with a fluent API (Ts.CreateNew().WithX(x)) can obviously be done, but:

  1. Care must be taken that readers of the resulting fluent code still understand what exactly it does. That is, the fluent API should be transparent in meaning, and unambiguous. Designing such an API may be more work than is anticipated (it might have to be tested for ease-of-use and acceptance), and/or…

  2. designing it might be more work than is necessary: In this example, the fluent API adds very little ""user comfort"" over the underlying API (a language feature). One could say that a fluent API should make the underlying API / language feature ""easier to use""; that is, it should save the programmer a considerable amount of effort. If it's just another way of writing the same thing, it's probably not worth it, because it doesn't make the programmer's life easier, but only makes the designer's work harder (see conclusion #1 right above).

  3. Both points above silently assume that the fluent API is a layer over an existing API or language feature. This assumption may be another good guideline: A fluent API can be an extra way of doing something, not the only way. That is, it might be a good idea to offer a fluent API as an ""opt-in"" choice.

","4906","","4906","","2013-08-10 17:25:50","2013-08-10 17:25:50","","","","1","","","2013-08-10 17:19:55","CC BY-SA 3.0" "208265","2","","208238","2013-08-14 15:00:10","","25","","

From the GPL FAQ (but the advice is applicable to all licenses):

Why does the GPL require including a copy of the GPL with every copy of the program?

Including a copy of the license with the work is vital so that everyone who gets a copy of the program can know what his rights are.

It might be tempting to include a URL that refers to the license, instead of the license itself. But you cannot be sure that the URL will still be valid, five years or ten years from now. Twenty years from now, URLs as we know them today may no longer exist.

The only way to make sure that people who have copies of the program will continue to be able to see the license, despite all the changes that will happen in the network, is to include a copy of the license in the program.

(emphasis mine)

The moment the site hosting your license goes down or changes its URL paths, people who have copies of your software can no longer verify what rights they may safely exercise. Suppose even that you could somehow guarantee that that exact URL will be forever online: the ability for users to verify that their use of your software is legal still depends upon the ability to connect to that particular URL. While this requirement may not be onerous in your particular city/country/planet, it may be onerous elsewhere. You should not impose this requirement, especially when the workaround (including the full license text) is trivial.

You might answer this complaint by saying, "So what? If the URL does go down or is not accessible, an unambiguous descriptor like 'GNU GPL v3' should be sufficient. Full-text copies of the GPL are plentiful; users can look up the license themselves." A few problems immediately spring to mind:

  1. This doesn't generalize to license identifiers that are less clear (the phrase "BSD license" comes to mind).

  2. This doesn't generalize well to licenses that are less common or have been customized ("GPL with linking exceptions" comes to mind: which linking exceptions?). How common does a license need to be before it's reasonable to expect a user to find it reliably by name?

  3. This still requires users to have an Internet connection, which may not be the case, even if they had a connection at the time they got the software. (And they may not have had Internet access when they got the software: "the CD age" has not yet ended in many parts of the world. As an additional case, consider national populations that have widespread Internet access but censor large parts of it.) A consequence of freely-redistributable software is that a recipient may not receive a copy of your software directly from you or through a distribution channel you originally anticipated.

One final argument against license links is noted by MichaelT's comment below: it could allow you to dynamically, retroactively change the license. This could be done intentionally, but it could also be done by accident, if you changed the license between versions of the software, but used the same license link for both versions, thereby clobbering your old license out of existence. Such a switch would add difficulty for people who need to prove they got their older copy under a different license than the current version.

So why do I have to keep the license in the project root?

I'm not a lawyer, but I've never seen any compelling argument that you do need to keep licenses in the project root. Even the GPL, which specifies that the license must accompany each copy of the work, is silent on how it must accompany the work. (This may be because the GPL could be applied in non-software contexts, where the notion of "root directory" is not meaningful.)

Keeping the license in the root directory is probably a good idea because it maximizes the likelihood the user will see it, and thereby minimizes both user frustration and the likelihood of complaints against you for trying to hide the license in some obscure directory. If you have many licenses, it might make more sense to place them all in their own folder, and include an obvious project README that contains file paths to find the license for each component.

Placing your license in the directory root is a helpful practice also because it can disambiguate the licenses of modules that are licensed differently than the work as a whole. Suppose my project FooProj uses the stand-alone module BarMod. FooProj might be GPL-licensed, while the standalone module might be MIT-licensed. When I first open FooProj, I see a copy of the GPL in the root and understand that the work as a whole is GPL-licensed. When I descend into the folder for BarMod, I see a new license file there, and I understand that the contents of this folder are MIT-licensed. Of course, this is only a helpful aid; you should always indicate the licensing of your modules explicitly in a README, NOTICE, or similar file.

In sum, using the file root is a matter of convenience and clarity. I have not seen any legally binding open-source license text that requires it, nor do I know of any reason why it would be legally required. Your license should be reasonably easy for the recipient to discover; including the license in the project root is sufficient, but not necessary, to satisfy this criterion.

","51295","","-1","","2020-06-16 10:01:49","2014-10-03 18:38:40","","","","3","","","","CC BY-SA 3.0" "208574","2","","19244","2013-08-17 07:18:11","","9","","

Coming in late to the game, but I provide this for later developers who might stumble across this question.

I would strongly advise against AOP if your application depends on it to operate correctly. Aspects work like this:

  • Advice (additional behavior) is applied to
  • Join points (places where the extra code can be attached, such as a method start or end, or when a given event triggers)
  • ... where pointcut (a pattern that detects whether a given join point matches) patterns match

For anyone who's been doing computers for a long time, the fact that patterns are used might be something to look at closely. So here's an example of a pointcut that matches any method named set, regardless of the arguments:

call(* set(..))

So that's a fairly sweeping pointcut and it should be clear that handling this with care is advised (no pun intended) because you're applying advice to many things.

Or heck, let's apply advice to everything, regardless of name or signature!

execution(* *(..))

So clearly we should be careful because there's a lot of power here, but this is not an argument against aspects — it's an argument for caution because there's a lot of power here and pattern matching can easily go awry (just hit your favorite search engine for aop bugs and have fun).

So here's what looks like a relatively safe pointcut:

pointcut setter(): target(Point) &&
                   ( call(void setX(int)) ||
                     call(void setY(int)) );

That explicitly provides advice if methods named setX or setY on a Point object are found. The methods can only receive ints and they must be void. Looks pretty safe, right? Well, that's safe if those methods exist and you've applied the correct advice. If not, too bad; it silently fails.

To give an example, a friend was trying to debug a Java application where, every once in a great while, it would return incorrect data. It was an infrequent failure and didn't appear to be correlated with any particular event or data. It was a threading bug, something that is notoriously difficult to test or detect. As it turns out, they were using aspects to lock methods and make them ""thread safe"", but a programmer renamed a method and a pointcut failed to match it, thus causing a silent breakage of the application.
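
To make that failure mode concrete, here is a small, hypothetical annotation-style AspectJ sketch (the class and method names are invented):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class LockingAspect {

    private final Object lock = new Object();

    // Serialises calls to a method literally named "increment" on Counter.
    @Around("execution(void com.example.Counter.increment())")
    public Object lockIncrement(ProceedingJoinPoint pjp) throws Throwable {
        synchronized (lock) {
            return pjp.proceed();
        }
    }
}

If someone later renames increment() to add(), this still compiles and weaves without complaint: the pointcut simply matches nothing, the locking evaporates, and the only symptom is an occasional race condition.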

Thus, I tell people that if they must use AOP, to treat aspects like exceptions: in a well-designed system and if nothing goes wrong, they can be removed and the software still functions correctly. However, if the functionality of the program depends on AOP, you introduce a fragility to your program that is unwarranted.

Thus, logging, debugging and tracing are great examples of behaviors that are perfect for aspects, but security? Nope. Thread safety? Nope.

For a robust alternative to AOP, see traits. Rather than being bolted onto the language, they are integrated into it directly, don't need a ""trait aware"" IDE (though it can help) and have compile-time failures if the methods you require are not present. Traits do a much cleaner job of handling separation of concerns because the problem was better defined from the start. I use them extensively and they're fantastic.

","99796","","","","","2013-08-17 07:18:11","","","","2","","","","CC BY-SA 3.0" "421126","2","","379156","2021-01-16 12:31:06","","7","","

It's not that simple, and of course "it depends":

I can't remember where, but I think I remember reading Roger Johansson write something along the lines of "Don't queue messages using an enterprise queue if your actors will be the only consumers".

What this refers to is the very high cost of serialising a message via an enterprise bus (not in memory), which, in order to be made "durable", has to first write to persistent storage (typically disk) and get confirmation of a successful write before continuing (plus other costs: serialisation, deserialisation, network latency, packet re-combination, etc.). This can be up to 100 000 times slower (a random, really huge number, but you get my point) than class A calling a method of class B, or class A reaching class B via a Mediator. In-process method calls are guaranteed to be ACID, and while they're not durable, if you're 100 000 times faster, then replaying the entire message chain on failure is a responsibility of the parent, typically with its own try/catch. So the system crashes, so what (I laugh in the face of system failures, mwaaa! "mini me" strokes cat): when it starts up it goes, oh, so-and-so saga didn't complete, replay it, and all its child messages get replayed.

The key lesson here is to make the entry-point message to the microservice "durable", and to make all internal messages fast as heck and idempotent (really rough paraphrasing here).

TLDR; the entry point to a micro-service should be your service processing a message picked up from some durable message bus. Internally, consider using patterns like an in-memory bus or an actor framework (e.g. libraries like Akka.net, TPL DataFlow, etc.) and, as far as possible, designs that don't require durability anywhere other than at the incoming and outgoing edges of the main message-processing pipeline.

  • In memory message bus typically does not "queue" work, i.e. Gregory Young's FakeBus linked to above just starts the work in a background thread, so you have to worry about concurrency.
  • Concurrency frameworks like Akka.net and TPL DataFlow give you high- and low-level tools respectively for managing work (thread) concurrency, eliminating a large portion of software engineering problems. Akka.net focuses on higher abstractions around the actor pattern, whereas TPL DataFlow provides low-level thread "Pipeline" concurrency management. I highly recommend you complete introductory training on both before you choose either tool.

If you work on your own, are highly detail-oriented, or are writing a library, then possibly consider TPL DataFlow. If you're working in a large team, and building mainly business applications, consider Akka.net or similar actor frameworks.

Message Bus

  • use to send the big Domain Commands or events. e.g. InvoiceCreated, PaymentMade
  • use at entry and exit of micro services

Mediator Pattern

  • as described further up, use to simplify code. In my experience it makes code less brittle, and easier to refactor and easier to read (scan) quickly.
  • typically it makes DI a lot cleaner, without having to have such huge complex dependencies injected.
  • does not make messages durable, does not do anything for concurrency

In memory Bus

  • high speed way of decoupling components
  • Use for handling private, internal messages between components of the service.
  • as the service grows, it might become a collaboration of micro services. Only use an in-memory bus to communicate between microservices when there is a parent that can supervise any long-running sagas. Typically the durability of the entire saga might be managed by the parent as a result of responding to a durable "message bus" message.

Finally, since I haven't yet answered the OP's question -> "Is it appropriate to use an InMemory Bus for a single Microservice (for domain events) and a durable Message bus for integration events (between Microsservices)?"

I'd say... yes, spot on.

Hope this helps?

","383226","","383226","","2021-01-16 12:54:52","2021-01-16 12:54:52","","","","0","","","","CC BY-SA 4.0" "321672","2","","321650","2016-06-08 17:48:47","","3","","

Some observations:

...when I reject his stories

I don't know your work culture or process, but to me rejecting a story is a severe step. If I were the dev, I would also push back on that, as it is a recorded action that reflects badly on me and on the team.

He says its unfair since I don't specify the edge cases.

It's unfair of him to expect you to know all the edge cases. But at the same time, it's unfair for you to expect that of him. Every change has risk, and as issues are discovered y'all need to work together as a team to address them.

I don't know his design for the story until after he implements it

You should not have to know the design. It can be helpful to know the design in order to make initial educated guesses as to which stories are easier or harder for backlog management. But avoid trapping the developer into your design when you write stories. It sucks all the fun out of the work when the developer is simply a voice-activated keyboard for the PO.


It sounds like you guys should work on process improvement and do some team building. Some things I might suggest for process:

  • Suggest that the dev include time in the story to cover fixing discovered edge cases. Heck, make it part of each user story. This is easily defensible via the goal of 0 new bugs introduced. The problem is that the dev is not planning for it currently, and he's out of time when you discover issues. It's going to take time either way, so put it in the story where it is visible during planning.
  • After your testing (and thank you for testing by the way!), send the dev a list of discovered issues. The fixing of those issues will go against the ""fixing edge cases"" condition of satisfaction.
  • If anything remains unfixed or is discovered too late, decide whether the story needs to be pushed based on whether the use case can be fulfilled. Known issues and work-arounds happen. Disclose them in release notes and create new stories to fix them.
  • If there is a particular rough spot in the process that generates pushback, then change your process! After all, process improvement is part of Scrum. For instance, if your dev gets upset when you reject the story, then suggest to the team a change in process so that rejection doesn't trigger fixes. Do the testing and fixes before Done and Rejected.
  • Work with the team and what they have produced and make the best use of it you can. They don't do perfect work and neither do you. So plan for that. My teams have usually been devops, so we have an Unplanned Support user story each sprint for emergent issues... planning for the un-plan-able.
","44202","","","","","2016-06-08 17:48:47","","","","1","","","","CC BY-SA 3.0" "422408","2","","422392","2021-02-18 07:42:35","","3","","

Stewardship

A steward is responsible for managing a common resource, be that land, a rare resource, or your git repository.

They are responsible for managing the conflicting forces that afflict any shared/critical area, ensuring:

  • that posterity has its voice (the future people),
  • that chores are undertaken (like removing garbage),
  • that maintenance is performed (because broken stuff isn't useful),
  • that squabbles and serious issues are adjudicated and decided on (because there is nothing like politics aka decision hell to bog a project down)
  • that massive infrastructure is built (no one individual can build an airport, but everyone benefits from its existence)

In fact every git repo should have a steward.

Who is the Steward?

Most repositories implicitly have a steward - the team that works on that code base. And for small teams this works well enough. People work well in small teams/families of ~12 individuals. There is usually enough bandwidth available to sort most stuff out then and there.

But what if you have a large team of fewer than about 250 individuals (which is the magic number for most villages without mayors)? Generally there is someone: an architect, a senior dev, a manager. That someone is respected, and considered fair and capable by the other team members - sufficiently so that they will be approached for guidance on big changes, and asked to weigh in on disputes between two or more devs.

What if no one has stepped forth? Then you need to officially anoint someone as the steward of that repository.

But what if you have many teams working in the same repository? Time for a tough decision:

  • Get each team to put forward a steward to take part in a steward council. That council is responsible for that repo.
  • Create a new team to be the exclusive stewards of that code base, and make them responsible for vetting contributions.
  • Break up that repository. Split it into smaller components and give each team their relevant component. Have a single repo serving as the platform into which the other components plug in, and have a team responsible for that.

Chances are you will need to deploy all three strategies as you move toward whatever shape of code base/architecture is desired.

","319783","","","","","2021-02-18 07:42:35","","","","1","","","","CC BY-SA 4.0" "423335","2","","423333","2021-03-13 00:47:57","","2","","

"If the call for the financial report comes from the web view then the parameters received by the Financial Report Controller (for example date) are probably in JSON format. If it comes from a print instruction on a CLI application then it could be plain text."

You don't pass in either of these to the interactor.

Note the Financial Report Request element; it has a <DS> tag on it; that's short for a data structure. It's either an actual data structure, or just a parameter list that a method on the Financial Report Generator has. The choice there is yours; creating an actual data structure creates better separation, and is more explicit and more flexible, but it's also more work.

So, what you pass in is the data in the format specified by the request model. You don't pass in either of the source formats to the interactor, as these are external formats (unless you have purposefully adopted one of those formats as your request model, and accepted the tradeoffs that come with that). The controller module will convert the data to a Financial Report Request (it will either initialize the data structure, or if it's just parameters, it will just extract the values from the source data, and pass them as parameters), and then it will call a method on the interactor (Financial Report Generator) with that data.

"My second question is about the presenters. There is a Screen Presenter and a Print Presenter. If the controller is called to get a financial report, how does it know at the end which presenter to use ?"

This is an inheritance diagram; this is not depicting an instance of a Screen Presenter and an instance of a Print Presenter floating around. In the setup depicted, at runtime, there's just one or the other. The controller doesn't know which kind of presenter it's connected to, and it shouldn't - that's the whole point of dependency inversion. It only has a member variable that has the type of the Financial Report Presenter interface.

Some other code that knows about both (what's called the composition root, e.g., main, or a dependency injection container) will pick a concrete presenter and inject that instance in the controller. Then later on, when the controller has to use it, it will take the Financial Report Response returned by the interactor, and will use it to call some method (or maybe several methods) provided by the Financial Report Presenter interface; this will polymorphically invoke a method on the concrete presenter that has been "plugged in".
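
For illustration, here is a minimal Java-flavoured sketch of that wiring; all class and method names are invented and are not prescribed by the diagram:

class FinancialReportRequest { /* dates, filters, ... */ }
class FinancialReportResponse { /* plain data produced by the interactor */ }

// The interactor (use case).
class FinancialReportGenerator {
    FinancialReportResponse generate(FinancialReportRequest request) {
        return new FinancialReportResponse();
    }
}

// Output boundary: the only presenter type the controller knows about.
interface FinancialReportPresenter {
    void present(FinancialReportResponse response);
}

class ScreenPresenter implements FinancialReportPresenter {
    public void present(FinancialReportResponse response) { /* build a screen view model */ }
}

class PrintPresenter implements FinancialReportPresenter {
    public void present(FinancialReportResponse response) { /* build a printable document */ }
}

class FinancialReportController {
    private final FinancialReportGenerator generator;
    private final FinancialReportPresenter presenter;   // interface only, injected from outside

    FinancialReportController(FinancialReportGenerator generator, FinancialReportPresenter presenter) {
        this.generator = generator;
        this.presenter = presenter;
    }

    void generateReport(String externalInput) {
        FinancialReportRequest request = new FinancialReportRequest(); // converted from JSON, CLI text, ...
        FinancialReportResponse response = generator.generate(request);
        presenter.present(response);   // polymorphic call; the controller doesn't know which presenter this is
    }
}

// Composition root: the only place that knows about concrete presenters.
class Main {
    public static void main(String[] args) {
        FinancialReportGenerator generator = new FinancialReportGenerator();
        FinancialReportController screenFlow = new FinancialReportController(generator, new ScreenPresenter());
        FinancialReportController printFlow = new FinancialReportController(generator, new PrintPresenter());
        screenFlow.generateReport("{ \"from\": \"2021-01-01\" }");
        printFlow.generateReport("from=2021-01-01");
    }
}

Passing the presenter into a controller method per call, as mentioned further down, is an equally valid variation of this wiring.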

Depending on how your application is structured, if the business logic of the interactor (report generator) doesn't require the same instance being shared, in your composition root, you might simply create one instance preconfigured to use the screen presenter to be used for screen-based tasks, and another instance preconfigured with a print presenter to be used for printing.

Or you might use the Composite pattern to combine several presenters into one (derive a Composite Presenter from the Financial Report Presenter interface, wrap several existing presenters). Your composite presenter could have some logic that allows it to switch which of the child presenters it forwards to. Or, if you wanted to have the ability to output to all presenters simultaneously, you'd indiscriminately forward all the calls to all of the child presenters.
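
A tiny sketch of that composite variant, reusing the invented interface from the sketch above (this one broadcasts to all children; a switching version would just pick one child instead):

import java.util.List;

class CompositePresenter implements FinancialReportPresenter {
    private final List<FinancialReportPresenter> children;

    CompositePresenter(List<FinancialReportPresenter> children) {
        this.children = children;
    }

    public void present(FinancialReportResponse response) {
        for (FinancialReportPresenter child : children) {
            child.present(response);   // forward the same response to every child presenter
        }
    }
}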

Or, a method on your controller could accept a derivative of the Financial Report Presenter interface as a parameter, so every time you call that method, you can pass in a different presenter.

The setup depicted is pretty flexible, but no design works well for every imaginable set of constraints, so depending on the details of the actual application and on it's change patterns, you might adjust some elements of the design.

for example the company now wants to get financial reports in audio format so you can listen to it if you are blind or can't use a screen, how would you do it ? And what about controllers if the input is now in voice format too ?

Well, I understand this is hypothetical, but using voice recognition for input brings its own set of problems beyond just having to deal with another format. However, suppose you already have a module (or even another application) that handles those problems for you - that module (or application) would ultimately output commands in the form of some data structure, which would then be converted to the Financial Report Request, which would then be passed to the report generator. On the other side of things, you'd have an Audio Presenter and an Audio View that would probably include a voice generator, which would accept its own input (as plain text or some other format). This data would be generated based on the output of the interactor (the Financial Report Response data structure).

"If I have 10 types of reports, does that mean that the Screen Presenter and Print Presenter have to implement 10 interfaces?"

I wouldn't do that, because that will likely lead to a big tangled mess of a class. Don't forget the single responsibility principle (SRP). (BTW, let go of the idea that you "have to" do things in a certain way - you (and your team) are the one(s) making design decisions; you can do whatever makes sense for your specific set of problems, and nobody is going to grade you on this. Think of Clean Architecture as a map, not a prescription.)

How you'd go about this completely depends on the details of your application & business logic. E.g. if some of your reports use essentially the same data, but just show it in different ways (either visually, or maybe aggregate it along different axes, showing different types of summaries), you might opt to have a presenter for each kind of report, and have them all implement the same presenter interface. It's this interface that defines the input for the presenters, but note that, in this design, it's owned by the controller (the controller says "I provide these kinds of outputs, and these are the output interfaces I support; if you want an output from me, implement the corresponding interface"). So you might also choose to have a couple of controllers (e.g., if the way users interact with the system is sufficiently different in different scenarios). Or you might have methods on your controller that accept different kinds of top-level presenter interfaces as a parameter. Or a couple of controllers with such methods. Again, depends on the actual application, and on how far you want to go in adhering to separation of concerns.

You might be thinking that's too much work, but remember, these classes are supposed to be fairly small for the most part, and focused on a fairly narrow task. Another point, Clean Architecture is about structuring the dependencies; it doesn't really matter if all of its elements are represented by classes/objects - in principle, you can do it all with functions. Also, you don't have to get it all right at the very beginning, start with a best guess then evolve over time (that's why I said it's a map). See my answer here to see how you might arrive at a clean architecture–like dependency structure, starting from a typical function with an input and an output.

For reports that are fundamentally different, you might introduce a different presenter interface, and a different set of presenters. You'd also have to consider (or check by experimentation) how feasible your approach is - maybe one of the reports that's based on the same data takes too long to generate, but the others work fine, so you'd treat that one differently (adding a new presenter interface for it). You might also find that some reports involve barely any business logic, so you may decide to bypass the interactor completely and go directly to the database, where you'd rely on an SQL query to transform the data into the shape you need (note that this doesn't break the dependency rule).

Also, you don't have to follow the depicted design to a tee; in particular, if you have functionality that repeats over and over, you can extract/encapsulate it into a new class that you can then reuse. So feel free to introduce new elements into the design, or to split things, etc. (And in fact, there are probably elements that are omitted on the diagram you've posted.)

So, to find the answer, you have to think about what kinds or reports you are going to support, and what's the data you'll be working with, what modules you want to keep independent, etc. And you might not have all the information until the project has lived for some time; if you keep the project evolvable, you'll be able to reshape parts of the dependency structure as the need arises. Also note that not all parts of the codebase change at the same rate; those parts that barely ever change will derive little benefit from a full blown application of design principles; it's enough to keep them decoupled from parts that do change at the boundary where they interact.

","275536","","275536","","2021-03-13 18:14:19","2021-03-13 18:14:19","","","","3","","","","CC BY-SA 4.0" "324326","2","","324313","2016-07-08 03:41:27","","9","","

The std::bitset string-based constructor only exists since C++11, so it should have been designed with idiomatic use of exceptions in mind. On the other hand I've had people tell me logic_error should basically not be used at all.

You may not believe this, but, well, different C++ coders disagree. That's why the FAQ says one thing and the standard library does another.

The FAQ advocates crashing because that will be easier to debug. If you crash and get a core dump you'll have the exact state of your application. If you throw an exception you'll lose a lot of that state.

The standard library takes the theory that giving the coder the ability to catch and possibly handle the error is more important than debuggability.

Might be exceptional, might not be. The function itself definitely can't generally know, it has no idea in which kind of context it is being called.

The idea here is that if your function does not know whether or not the situation is exceptional, it should not throw an exception. It should return an error state via some other mechanism. Once it reaches a point in the program where it knows the state is exceptional, then it should throw the exception.

But this has its own problem. If an error state is returned from a function, you might not remember to check it and the error will pass by silently. This leads some people to abandon the exceptions are exceptional rule in favor of throwing exceptions for any kind of error state.

Overall, the key point is that different people have different ideas about when to throw exceptions. You're not going to find a single cohesive idea. Even though some people will dogmatically assert that this or that is the right way to handle exceptions, there is no single agreed-upon theory.

You can throw exceptions:

  1. Never
  2. Everywhere
  3. Only on programmer errors
  4. Never on programmer errors
  5. Only during non-routine (exceptional) failures

and find someone on the internet who agrees with you. You'll have to adopt the style that works for you.

","1343","","","","","2016-07-08 03:41:27","","","","4","","","","CC BY-SA 3.0" "324657","1","324661","","2016-07-12 17:20:27","","2","138","

I have been assigned as the lead dev of a newly formed team. I am the only person familiar with the software platform we will be using and the only person to have worked in this domain before. There are 4 other devs: 2 are essentially college grads, the other 2 are mid level. I'm struggling to balance my time between planning the approach we will take/delivering a POC and training the other team members. Also, critical design decisions need to be made, and I am trying to involve the other team members as much as possible, but without technical or domain knowledge there is a limit to what they can offer in terms of input.

What advice would you suggest to help get the team up to speed technically and in terms of understanding the domain? I am trying to do fairly in-depth code reviews as a group, with plenty of discussion/explanation/documentation about the rationale for design decisions, but I'm finding it hard going! Also, in things like sprint planning/design meetings, I wonder if I should deliberately take more of a back seat, as I find myself doing most of the talking - although when I do stay quiet for periods of time in these meetings, there tends to be silence and eyes turn towards me.

NB: I have fed back to senior management and made very clear that the lack of experience is a significant challenge and will impact delivery dates, but I've basically been told there is no money for formal training and to make the best of it :)

","174177","","","","","2016-07-12 17:44:21","Newly formed team who are unfamiliar with the platform and domain how to get them up to speed?","","1","1","","","","CC BY-SA 3.0" "425822","2","","425808","2021-04-27 12:36:55","","7","","

I think you're misreading an opinionated statement as being a literal statement of fact.

I saw an answer on SO which said that just having a class with methods doesn't make it OOP and that it represents Class Oriented Design.

If I really like cars, and I have a distaste for cheap foreign vehicles, I could tell my friend who drives a cheap foreign car "just because it has four wheels and an engine doesn't make it a car, you know", and my friend then takes this as truth and goes around looking for an answer to what the definition of a car is.

This is the position you find yourself in. You took what you heard as the literal truth and are now on a search for the answer that confirms what you heard.

What you heard wasn't literal truth. It is known as the "No True Scotsman" fallacy, whereby the speaker argues that a commonly understood definition, i.e. Scotsman, should be applied more restrictively. Instead of "person from Scotland", the speaker argues it should be "person from Scotland who doesn't put sugar on his porridge".

The problem here is that the speaker willfully redefines a commonly understood concept just to argue their point. While their underlying point may or may not be valid, silently changing definitions is a sign of poor communication skills as it does nothing but add confusion, just to not have to acknowledge something you don't like (such as sugar on your porridge, I guess).

The same is happening here. (Non-static) classes lie at the base of OOP design. However, the speaker is arguing that one can still use classes badly, and therefore such bad usage is "not true OOP". I don't quite disagree with their underlying argument, but I disagree with their conclusion and how they try to label it.

I'm not saying he's wrong per se; there is merit to his argument, but he's trying to argue about a purer form of OOP and is wrongly calling everything else "not OOP", instead of admitting it's "just not very pure OOP". It's at the very least an overstatement, which sadly detracts from the underlying opinion, which may actually contain value.


That being said, I'm not going to claim that I've never pulled a "no true ..." claim, but I'd like to think that I only do so when it is clear that I'm expressing a personal opinion rather than an objective truth, so it is clear that I'm talking about a personal definition rather than a common one.


Feel free to correct me if I'm wrong, but based on your question/comments I surmise you are a beginner who is learning about OOP. It's good that you're trying to think critically about what others say, but beware taking things too literally.
The internet is filled with people who tout personal opinion as if it were objective fact, and especially in abstract fields such as software development, it's easy to be misled by someone's assertions.

","106566","","106566","","2021-04-28 08:26:18","2021-04-28 08:26:18","","","","6","","","","CC BY-SA 4.0" "426193","1","","","2021-05-09 04:00:14","","10","2852","

What are some pros and cons of representing routes as legs or as stops?

A leg is a departure and arrival location, a departure time and a duration (or an arrival time and a duration).

A stop is an arrival time, a location, and a departure time.
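
To make the comparison concrete, here is a rough sketch of the two shapes as I understand them (the field names are just illustrative; our existing system may differ):

import java.time.Duration;
import java.time.Instant;

record Leg(String departureLocation, String arrivalLocation,
           Instant departureTime, Duration duration) { }

record Stop(Instant arrivalTime, String location, Instant departureTime) { }

A route is then either a List<Leg> or a List<Stop>, and each form can be derived from the other.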

The domain I'm modelling is a marketplace where people describe the route they're planning; suppliers can then bid on routes they're interested in. We have an existing system that is using stops. We don't have any major problems with stops, but I'm wondering whether using legs would be better, hence I'm asking about pros and cons I may not be aware of.

I searched for "software design trip modelling", "software design route modelling", "software trip leg stop", and all I found is documentation on a tool by Oracle and model railroad software.

I do understand that legs and stops are duals of each other, but in the database, there should be a single representation. https://dba.stackexchange.com/ is silent on the matter.

Thanks in advance!

","63658","","209774","","2021-06-20 22:59:26","2021-06-20 22:59:26","Pros and cons of representing routes as legs or stops?","","5","15","1","","","CC BY-SA 4.0" "327861","2","","327810","2016-08-08 10:46:27","","4","","

In addition to other answers:

  1. Many companies I've worked for issue their programmers with laptops (being based at clients' sites, it's easier to keep equipment safe if it's taken home after work, it lets you do the odd job from home on VPN in a pinch, etc.). Many years ago I already had trouble seeing another person's (the "driver's") laptop screen from the shoulder-surfing position - age will not improve this (and some screens become hard to read outside the ideal viewing angle in any case).

    So pair programmers will need sufficiently large screen(s), which will increase hardware cost and limit adaptability to location. That may not be a problem for some; in other instances it will be.

  2. I have also found that differences in personal hygiene preferences (including smoking, eating and drinking), as well as personality clashes, are bound to hamper productivity. It's easy enough to tell two programmers to "suck it up and get along", but often this will result in people keeping their mouths shut and silently sabotaging each other via passive-aggressive actions to vent their resentment of each other.

  3. Noise. I, for one, like a quiet working environment. I can't imagine the constant chatter from some groups of pair programmers (as you need to talk for communication). Even vocal music on my headphones tends to interfere with my concentration (bland instrumentals for office listening...). I guess this can be mitigated by moving away from the ubiquitous open-plan office to dedicated 2-person office rooms, but that's going to drive cost up again.

Anecdotes for your amusement:

  • A previous employer once got a contractor in from another country (all to remain anonymous to protect the guilty). The employer provided accommodation but not transport. Since said contractor lived along my route to work, I got volunteered to pick him up and drop him off again. Let's say his personal hygiene was not on the same standard as what I am used to, and he also smoked heavily ("the strongest!") while I don't. On our 15-min trip to the office I kept my window rolled down - even in winter - which did not prevent my car from smelling like a stale smoking room after the colleague's 3-month stint (no, he did not smoke in the car, but he did while waiting for me).
  • We also did not do pair programming, but sat next to each other at a conference table (for a time). After about a month, there was a nice brown ring on the table's faux wood around the position of the colleague's mouse hand. At that point I got an open desk right next to the call-center open-plan-area, which I preferred (with some help from my headphones).
  • Then there is the ubiquitous office beverage: coffee. Although I do drink it, I can get along without and do not drink as often as other co-workers may. Breaths at close range can be quite unpleasant - similar to the empty forgotten mug smell. Let's call the fragrance "muggy"...
","212087","","-1","","2020-06-16 10:01:49","2016-08-08 10:46:27","","","","0","","","","CC BY-SA 3.0" "330029","2","","329991","2016-09-02 15:50:34","","3","","

Bottom line: there isn't a way to know.

For the original question (before the philosophical answer): what is the product supposed to do, and does it do it? Measuring by defect count/density isn't sufficient. I couldn't tell if this was a library or an application, how large the code base is, how large the problem domain is, or what the severity of the defects is. For example, not handling one of 123 input formats could be a trivial defect or a show stopper, depending on the importance of the format not properly handled. And "better than nothing" is a high standard.

Assumption I make for this question: There is a difference between Code and Software. I define software as what a client/user uses to solve a problem, whereas code is the building material of software.

Software can only be measured subjectively. That is, the metric that matters for software is whether people use it to solve a problem. This metric depends on others' behavior, hence it's subjective. Note: For some problems a piece of software may be quite useful, and thus considered high quality (Excel for calculations), but not quality software for a different problem (Excel for playing MP3 files).

Code can (usually) be measured with empirical metrics. But the interpretation isn't 'yes/no' for quality, or even really on a scale of '0 to N'. Metrics measure against a standard. So, metrics can find areas of concern defined by the standard, but the absence of areas of concern is not proof that this is quality code. For example, useful metrics: Does it Compile? No -> Not quality. Yes -> ???. Does it pass Unit Test? No? Maybe? (because, Is the Unit Test Quality Code?), Yes -> ???.

So, like Godel's Incompleteness Proof showed for axioms of mathematics (that is, there exist mathematical statements that can't be proven true or false for any finite set of axioms), I don't think we could ever actually answer 'is this quality code?' for every piece of code. Intuitively, there is probably a mapping in there between software metrics to answer quality and mathematical axioms to answer provably true or false.

Another way to make this argument, is to step into natural language. William Shakespeare, Lewis Carroll and Mark Twain were all successful writers, and beloved of many for the quality of their writings. Yet what standard of grammar, vocabulary, style or voice could we apply that would consistently rate them higher than random 12th graders? And, while it may be possible to create some synthetic measure for those three, how would it rate the Book of John (KJV), J.R.R. Tolkien, Homer, Cervantes, etc? Then throw in Burroughs, Faulkner, Hemingway, Sylvia Plath, and so on. The metric won't work.

","93427","","93427","","2016-09-06 15:58:03","2016-09-06 15:58:03","","","","0","","","","CC BY-SA 3.0" "430098","2","","420748","2021-07-09 16:21:01","","1","","

A microservice is a "what". A cloud function is a "how".

It's possible for a cloud function to be a microservice. It's also possible for a cloud function to not be a microservice. It depends what the function actually does.

Wikipedia has a loose definition of a microservice. I'll go through the bullet points:

  • "often processes that communicate over a network to fulfil a goal using technology-agnostic protocols such as HTTP." - check.

  • "organized around business capabilities."

    Maybe. Depends on your functions! Is it a Billing function, or is it a SendInvoice function?

    Perhaps you have SendInvoice, CalculatePrice, and UpdateBillingAddress functions, and a database, which together are one microservice (the billing microservice).

    Since cloud functions can't store data, it would be extremely rare to have one that is a microservice all by itself with no database. But it could happen! Some people consider their microservices to be stateless and treat their databases as completely separate microservices.

  • "Services can be implemented using different programming languages, databases, hardware and software environment, depending on what fits best." - check.

  • "Services are small in size" - check.

  • "messaging-enabled" - check.

  • "bounded by contexts"

    Maybe. Depends on your functions. See point 2.

  • "autonomously developed"

    This is likely to be the most important defining factor. Do you have a team that works on just this function and nothing else? Or do they work on other parts of the program as well? If the function is just one part of a bigger program, then it's not a microservice.

  • "independently deployable" - check.

  • "decentralized" - check.

  • "built and released with automated processes." - check.

As you can see, cloud functions are certainly a platform on which you can build a microservice, but a thing you build on them doesn't automatically become a microservice just because it uses the platform. It has to actually be a microservice to be a microservice.

","115557","","","","","2021-07-09 16:21:01","","","","0","","","","CC BY-SA 4.0" "430302","1","430315","","2021-07-15 20:30:37","","1","105","

Context

I'm developing, together with my dev team, a mobile app in a client-server architecture, since there will be a webclient too, allowing some users (admins) to perform certain operations from the browser.

The REST API currently authenticates users by returning access and refresh tokens in the form of JWTs. Both local (username/password) and OAuth2.0 (only Google at the moment) flows are available, as I provide the user with these two different options for authenticating.

Problem

The flows that follow are working just fine when the API is called from the webclient, but now that we've started developing the mobile app a big question arose: **how do we keep the user authenticated on the mobile app even after the refresh token expires?**

All the famous apps out there do not prompt the user to authenticate, let's say, weekly or, worse, daily, but still I'm sure their authentication practices are (almost) flawless.

Tried paths

I've read many blog posts and articles, together with some StackExchange Q&As as reported below, but the right way to approach authentication and access persistence on mobile is still unclear.
  • Should I create a specific endpoint (or many) to provide non-expiring tokens only when the User-Agent header tells the API it is being called by a mobile device?

  • In the OAuth case, should I perform (I don't know how) silent calls to the OAuth provider to get back a new idToken and then request new tokens to my own API with it?

  • In the local case, should I keep user credentials stored locally? If so, how do I do that securely?

Some diagrams

These are the flows we've currently implemented, working as expected when the API is consumed by a web client.

Local


OAuth2.0

","397234","","397234","","2021-07-15 20:53:16","2021-07-16 09:55:42","Mobile authentication approaches, JWTs and refresh tokens","","1","2","","","","CC BY-SA 4.0" "431284","1","","","2021-08-23 09:58:04","","6","593","

I always take a practical attitude towards agile and Scrum. I am more concerned with customer collaboration, small/continuous releases, and incremental development than with following Scrum rules strictly.

I also find that some level of a self-managing/self-organizing team will always emerge when following an agile process (whatever "agile" means here). By self-organizing I mean team members decide among themselves who does what and how to collaborate.

How do you manage such a self-managing team effectively? Does the developer manager's job/responsibility shift in some way for a self-organizing team compared to a non-self-organizing team? I don't think this is a vague or pointless question; e.g. even though Scrum is silent on the manager's role most of the time, scrum.org has an article like "How to Lead Self-Managing Teams?".

I am sure that, whether on a self-organizing team or not, the developer manager's job will always include removing impediments and mentoring/coaching team members. As The Role of Leaders on a Self-Organizing Team puts it, "There is more to leading a self-organizing team than buying pizza and getting out of the way."

I am hoping to see answers from someone who has experienced this challenge. I searched the SE website and could only find this Q&A, Does a mature agile team requires any management?, which has something in common with my question, but they are still quite different questions.

I realize I may need to provide a definition for developer manager, since that may also cause some disagreement. I found the article "Development managers vs. scrum masters" from Atlassian; in a way, it has answered my question.

BTW, Harvard Business Review published an article called "What Great Managers Do Daily" which says, "Our data is a start, highlighting some traits of good managers that are actionable on a daily basis." On a daily basis, but how does one do that on a self-organizing team? This article is one of the reasons I asked the question. Apparently, when they did the research and published the article, they didn't have the concept of a self-organizing team in mind.

--- update ---

I came across the article "Why Agile rarely works in the Enterprise", which I highly recommend to anyone who is interested in my question.

","217053","","217053","","2021-10-09 07:47:17","2021-10-09 07:47:17","How to manage the team effectively on a self-managing team?","","2","17","2","","","CC BY-SA 4.0" "431844","2","","431836","2021-09-13 08:46:50","","1","","

Microservices are a much-overused pattern, often adopted by people copying large companies inappropriately. In general, if you can say "I am writing two microservices" (rather than "these two teams are writing two microservices"), then it's not really two microservices but one piece of software that's been split unnecessarily.

What you describe is a "platform" service used by many other services, such as PaymentService and PreferenceService. Suppose each of those services for some reason has to GET a list of accounts according to some criteria. For example, PaymentService needs a paginated listing of accounts with the fields business_address, tax_id, and invoiced set; meanwhile, PreferenceService needs to send batches of emails to accounts whose email field has been set. (A somewhat contrived example.)

The classical microservice approach to that is to split the database. The payments service gets the "business_address, tax_id, and invoiced" columns and the PreferenceService gets "email" (maybe this should be the "email" service instead?). They share a key.
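
A rough sketch of what that split might look like, using just the fields from the example above (the row shapes are assumptions for illustration):

// Owned exclusively by PaymentService, in its own database.
interface PaymentAccountRow {
  accountId: string;       // the shared key
  businessAddress: string;
  taxId: string;
  invoiced: boolean;
}

// Owned exclusively by PreferenceService (or a dedicated "email" service), in a separate database.
interface PreferenceAccountRow {
  accountId: string;       // same key, different store
  email: string | null;
}

Neither service reads the other's tables; the only thing they agree on is the meaning of accountId.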

That results in deploying three databases, but you're doing microservices because your user count and transaction throughput don't fit on one database, right?

","29972","","","","","2021-09-13 08:46:50","","","","0","","","","CC BY-SA 4.0" "433386","2","","433376","2021-11-08 15:39:30","","3","","

You've run into an issue that is an ongoing, careful balancing act in modern software development. The more loosely you couple your components, the less you can rely on performance optimizations that span more than one component at a time.

For example, if you want to extract a report which links all your Customer, Invoice and PurchaseOrder data, the most performant method would be to write a single query and let your database server optimize it. This is something a DB server tends to specialize in.
However, from a coding perspective, this immediately muddies your loosely coupled components, as there is no clear separation between the three domain entities you've defined.

From a code structuring perspective, you'd rather fetch the Customer, Invoice and PurchaseOrder data separately, so that you can keep these components independent from one another. That's great for your code structure, but your query performance is going to suffer because of it.
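
As a rough illustration (the repository and entity names below are assumed, not taken from your code), the two approaches might look like this:

// Option 1: one query; the database does the joining and optimizing,
// but Customer, Invoice and PurchaseOrder are no longer separated in the code.
const reportSql = `
  SELECT c.name, i.total, po.po_number
  FROM customers c
  JOIN invoices i ON i.customer_id = c.id
  JOIN purchase_orders po ON po.id = i.purchase_order_id`;

// Option 2: keep the components independent and join in application code.
// Cleaner boundaries, but extra round trips and in-memory matching.
interface CustomerRepository { findAll(): Promise<{ id: string; name: string }[]>; }
interface InvoiceRepository { findAll(): Promise<{ customerId: string; purchaseOrderId: string; total: number }[]>; }
interface PurchaseOrderRepository { findAll(): Promise<{ id: string; poNumber: string }[]>; }

async function buildReport(
  customers: CustomerRepository,
  invoices: InvoiceRepository,
  purchaseOrders: PurchaseOrderRepository
) {
  const [allCustomers, allInvoices, allOrders] = await Promise.all([
    customers.findAll(),
    invoices.findAll(),
    purchaseOrders.findAll(),
  ]);
  // ...match the three lists up by id here...
}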

I'm using repositories as the example here, but the same applies to your situation. You're stuck between cleanly decoupled services and a desire to optimize things by coupling your services.

Whether you value performance or code structure more is not something we can universally answer.

whilst I started out with good intentions of high cohesion and loose coupling as soon as the delivery pressure ramped up I began 'code sharing' across packages and have now got tight coupling across the application

There's a saying in my native language, which loosely translates as "the last stones seem to weigh the most".

That is the classic development story. This isn't even development-specific; the same applies to most projects people tend to undertake, e.g. building a house, cleaning a house, or painting something.
In the beginning, they start with clean broad strokes, keeping things efficient and elegant. But at the end of the project, when the end is very much in sight and a minor hindrance is encountered, people are much more likely to sweep things under the rug and be done with them. No one wants to redo the foundation when they're already doing the final detail work.

Side fact: the house I live in is a monument to this kind of behavior. I didn't cause it, past owners did (I just rent it), but it very much amuses me as a software developer, and annoys me as the tenant.

This is human nature, and there are justifications for why we engage in this behavior, though you run the risk of regretting your decision down the line. Especially in software development, the unexpectedly long-lived nature of codebases tends to bite back in the end, which is why developers who value good practice are so sensitive to bad practice.


The main takeaway here is that you can't realistically expect that you would plan for everything correctly in advance when you first laid the groundwork. Instead, you need to shift your expectation to adjusting the groundwork when the need for it being adjusted presents itself. This comes in many forms.

Clean coding practices help lighten the load when the time to adjust things comes.

But sometimes a late change requires a significant change to the codebase, and this is where you need a PO or team lead who values clean code enough to dedicate the needed effort and time to it. More often than not, even with the cleanest coding practices, the latter is what fails and causes these kinds of issues, and as a developer you generally don't have the authority to override this. At best, you can hope to sway your PO's/team lead's/manager's mind so that they give you the needed time.

","106566","","106566","","2021-11-08 15:44:56","2021-11-08 15:44:56","","","","3","","","","CC BY-SA 4.0" "435454","2","","435452","2021-12-20 13:26:28","","1","","

This sounds like it should be a major version update: it seems that in the previous version of the software a function call would return data relating to, e.g., 2019, while in the new version the same function call will return data relating to 2020.

So the expected behaviour of the old version is not maintained in the new version; it's not backwards compatible and should therefore be marked as a major version update.

This encourages people consuming the package to choose explicitly which version they need based on which data they are interested in consuming, and not let a package management tool choose automatically for them and apply the update silently.
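
As a rough illustration, using the npm semver package (the version numbers here are made up):

import semver from "semver";

// With a typical caret range, a compatible minor update is picked up automatically...
semver.satisfies("1.5.0", "^1.4.0"); // true  -> a tool may apply this silently

// ...but a major (breaking) update is not, so consumers must opt in explicitly.
semver.satisfies("2.0.0", "^1.4.0"); // false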

","23915","","","","","2021-12-20 13:26:28","","","","0","","","","CC BY-SA 4.0" "439261","1","","","2022-06-15 13:16:49","","4","276","

My team owns several services. One is our primary focus: an accounts service. We have a plentiful stream of feature work and tech debt to address there, and everyone (engineers, product, design, management) is aware of its goings-on and where it's headed.

Two other services, though, have to do with exporting our data to a third-party CRM, namely Salesforce. While they are important (our customer support and operations teams rely on Salesforce to follow up on leads and generate invoices), they feel like "add-ons" to our team. The services were written years ago by no one currently on the team. No feature work motivates anyone to learn the flow deeply, except when (a) something breaks or (b) a new feature for accounts necessitates an update to the Salesforce flow to maintain the status quo. In both scenarios, only an engineer can practically investigate what work must be done, so that engineer temporarily becomes intimately familiar with the Salesforce flow. But then follows a period of "peace": no one needs to touch these services, the engineer loses familiarity, the PM loses familiarity (if they gained some via discussions with engineers), and we start all over again, possibly with a different engineer next time.

Sooner or later there comes a time when we engineers need a product decision, whether to resolve a Salesforce issue, or to know how to reflect a new accounts feature in Salesforce. But there's no one who understands the system besides us.

Why are things this way? Is this just a reality we have to accept in the given situation? Akin to a "code smell," is there an organizational problem here? How can we navigate to a better place? Assume that I'm willing to speak to tech leadership for whatever change must happen.

I want to add one last thing: about documentation. There have been several attempts to document this Salesforce flow, but it's as if no one knows how to capture the details in a practical way, and in a way that 6 months later an engineer can fully trust that it's up to date and dependable for investigations and decision-making. "Sometimes customer support converts the lead themselves because it's the only way to add a purchase order number before creating their first campaign." "Customer support has to edit the company name to be exact or else invoices won't be paid." "Until the client fills out their company name, we use a UUID as a stand-in because the field must be non-empty and unique in Salesforce." "The following fields carry over from the lead to contact details." "The business address must be..." "If an account was merged..." It's just... indigestible. A high- or even medium-level document would miss all the details, which is why engineers have to read code to be confident at any point what's going on. A low-level document ends up being too much information to process and impractical to use or trust that it is absolutely up-to-date and correct.

","51082","","51082","","2022-06-16 15:35:57","2022-06-16 18:05:52","Situation where software engineers effectively take role of product (unwanted, but for practical reasons)","","2","2","1","","","CC BY-SA 4.0" "439644","2","","439638","2022-07-05 12:47:29","","5","","

I think the lower the coupling, the better. I get your team's point: they're trying to hide the dependency and keep it centralized inside the InvoiceService internals. I also get your point: you're coupling controllers and services anyway by using a service in the controller, but they're trying to reduce that dependency to the bare minimum. Imagine that you add both services to that controller: changes to the contractService would then require changes to the invoice controller, which is far from intuitive. You could consider it a bad smell. Hope it helps!
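
A rough sketch of the arrangement your team is suggesting (all names below are assumptions based on the question):

// The contract dependency stays an internal detail of the service layer.
class ContractService {
  get(contractId: string) {
    return { id: contractId, total: 100 };
  }
}

class InvoiceService {
  constructor(private contracts: ContractService) {}

  createForContract(contractId: string) {
    const contract = this.contracts.get(contractId);
    // ...build and persist the invoice from the contract...
    return { contractId: contract.id, total: contract.total };
  }
}

// The controller only knows about InvoiceService; it never sees ContractService.
class InvoiceController {
  constructor(private invoices: InvoiceService) {}

  create(req: { contractId: string }) {
    return this.invoices.createForContract(req.contractId);
  }
}

If the controller injected both services instead, every change to the contract service could ripple up into the invoice controller.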

","417479","","417479","","2022-07-06 11:51:45","2022-07-06 11:51:45","","","","2","","","","CC BY-SA 4.0" "440209","2","","440208","2022-08-03 17:53:21","","1","","

As you show in your examples, something has to interact with the data layer; it's really a question of naming and how much you split things up.

It's always good practice to have some in-memory-only operations to deal with any complex logic, because that makes things easy to test. But if you have a Domain object for a business operation of some kind, you want its naming and use to match the business definition of the process, which will involve saving to a database.

No one says you can't split a domain operation into multiple sub-domain objects though, e.g.:

class CashRegister
{
    public void Buy(string[] items)
    {
        // create orders
        // save to db
        // work out price
        // create invoice
        // save invoice
        // charge customer
        // save payment
    }
}

class SummerSalePriceCalculator
{
    public Invoice PriceOrder(Order o)
    {
        // in-memory price calculation, returns an Invoice object
    }
}

class VisaPaymentProcessor
{
    public bool ChargeCard(...)
    {
        // connect to a third party and take a payment
    }
}

etc etc

All these are Domain objects of one kind or another, only the top level one talks to the data layer.

I could take the save stuff out of the CashRegister, but then (a) I would need some other object to do it, and what am I calling that? And (b) if I'm doing DDD, I've had some meeting with the Sales team where they told me they have a process where "a [Customer] [Buy]s [items] using a [Cash Register]" and that "[Buying] means the [order], [invoice] and [payment] are saved to a database".

I have identified these as "domain terms", and I am making my object names and code match them so that the code matches the way the business talks and thinks.

","177980","","177980","","2022-08-03 18:00:45","2022-08-03 18:00:45","","","","2","","","","CC BY-SA 4.0" "440701","2","","440699","2022-08-29 09:01:54","","5","","

You could have rebased before merging or before making the pull requests:

Development work while teammates are on holiday:

*      feature-c
| *    feature-b
| | *  feature-a
| |/
|/
*      new-api
*
*      develop
|

Pull requests/merges after new-api was reviewed and merged:

*     feature-c
| *   feature-b
| | * feature-a
| |/
|/
*     develop
|\
| *   new-api
| *
| *
|/

You use git rebase to move the branches and reattach them to a different base.

  • git switch feature-a
  • git log to find the last commit of the old (pre-code-review) new-api branch on which feature-a was based
  • git rebase {commit-hash} --onto develop, where {commit-hash} is the hash of the new-api commit you found with git log

Note:
Hopefully the review of new-api led to some improvements to new-api. When the commits of the feature branches are rebased, they have to adapt to the improved new-api.
This might result in merge conflicts and, worse, in successful merges that silently produce broken code.
Don't fear the merge conflicts: you know very well what you intended to do in the commits of the new features and how that should work with the improved new-api.
To have git rebase detect when merging leads to broken code, see the --exec option of git rebase (e.g. --exec "make test"): with it you can have git compile and run the unit tests on your code and pause when either detects an error.

Advice:
Use small steps when doing a complicated rebase; don't try to fix all problems at once.
You can use git reflog to find old branch HEADs and use git reset to undo a rebase (when taking small steps you do not have to redo so much work). Even easier: mark the branch HEAD before starting a rebase with git branch -c {my-backup}, where {my-backup} is a branch name that helps you find it when you want to undo a failed rebase.

","44124","","","","","2022-08-29 09:01:54","","","","5","","","","CC BY-SA 4.0"