Hello. I need to design a hierarchy of classes (up to 10 classes), and I am looking for a UML-like tool for this which would generate the code at the end. There was ModelMaker a long time ago (and we licensed it), but this is now a one-time task, so I'd prefer something simpler and preferably free. Any suggestions, please?

Comments

  1. Why don't you use your old license of ModelMaker?

  2. Uwe Raabe
    It was 15 years ago :). I expect there have been improvements in various modern offerings since then, so I wanted to try them.

  3. UML hasn't changed that much in the last 15 years.

  4. Jeroen Wiert Pluimers Reminds me of a great Dr. Dobb's article title: "UML 2.5: Do You Even Care?"

  5. Jeroen Wiert Pluimers
    It's not about UML changes, but about changes in tool usability and possible integrations. (Not that I don't still have that old license somewhere, but I know how vendors feel when asked about ancient versions and ancient licenses, so I don't want to bother Gerrit with this.)

  6. Eugene Mayevski I'm sure Gerrit won't mind.

  7. Sven Harazim
    Looks nice, and it has extensions for generating and parsing code in multiple languages, but unfortunately Delphi isn't among them.

  8. Didn't there use to be a modeler built into Delphi at one point? (It was one of the Enterprise or Architect upgrades.)

    UML always struck me as one of those things that big corporations loved to have for documenting designs, like a database diagramming tool. But unless the tool was two-way, the class definitions would quickly get out of sync with the diagrams, just as happens with docs.

    I draw stuff out on paper before getting started, then once the overall structure is defined, I focus on the code files. I suspect this is how most devs work. I mean, it's not as if people have stopped using OOP/OOD or writing classes! So there just may not be much of a market for such tools.

  9. David Schwartz "I mean, it's not as if people have stopped using OOP/OOD or writing classes!"

    Well, functional programming has been catching on...

  10. Last I checked, you can use functional programming with objects, although it's not clear to me how beneficial UML diagrams are for documenting FP.

  11. David Schwartz with mobile development UML would be of lesser use as well - too many classes, too messy an architecture usually. Other approaches are needed. I assume it's the same with web development.

  12. Eugene Mayevski Go with ModelMaker and buy a new license if you lost your old one. There is no better UML tool for Delphi. Mr. Beuze deserves it, and it is really cheap :-)

  13. BTW, there is probably also no better UML tool than ModelMaker for C#.

  14. In my mind, UML diagrams are the equivalent of schematics used to show how discrete logic circuits (e.g., TTL chips) were wired up. These kinds of schematics were fine for logic designs in the 1970's, but once we got to more complex chips and boards with dozens of such chips on them, the one-line-per-signal paradigm broke down. You had a data bus 8 or 16 bits wide represented by a thick line, and more complex building blocks that were represented by logic paths rather than discrete wires. (Today we've got data and control buses that are 32 and 64 bits wide, possibly wider, all represented by a single thick line! And if there's some kind of signaling protocol to deal with noise or security or whatever, it's just another annotation, like a color or dash pattern.)

    As Eugene Mayevski suggests, with logic split between two or more local CPUs (functional and display) plus cloud storage and possible database activities that operate locally and/or remotely, I'm not convinced UML is an effective way to model things like that either.

    Nor do I believe that writing code line-by-line is any better. After 50 years of writing software the same way, all we've done is figure out how to put the lines of code into smaller, more relevant containers so the compiler can help keep us from shooting ourselves in the foot as easily as we used to.

    Chip and hardware designers can use visual design tools to build circuits and hardware with millions of transistors and tens of thousands of functional blocks in a week, all of which conform to clearly defined interfaces and for which automated testing harnesses can be built with the click of a mouse.

    And yet in the software world, we still have to write code that's equivalent to how TTL circuits were assembled into board designs in the 1970's, with one signal or action per line.

    When will the software community do for itself what it has done for the rest of the world? Create a visual design paradigm that eliminates the need to do 99% of the coding?

    Unfortunately, every time this comes up, somebody points to stupid visual programming tools that are little more than "automated flow-charts", the equivalent of schematic diagramming tools used in the 1970's as well. This is 2016, and not even hardware engineers use crap like that any more.

    Why is it that the software industry cannot pull its collective head out of the 1970's and build tools on par with what architects use to build 100-story skyscrapers, what aircraft engineers use to build jets, and what chip designers use to build chips with 256 CPU cores on them?

    UML may be a nice way for a pointy-ear pin-head manager to document what his team is working on to a VP in a meeting, but it doesn't do diddly-squat to improve the programming process.

    "Coding" has become an anachronism for everybody outside of the software world. Unfortunately, we inside of the software world have not been able to free ourselves from this addiction yet.

    Thank goodness AI is going to come along soon, solving this problem for us. People will simply say what they want to accomplish, the software will ask some questions about input data sources and output formats, and then generate an app that does what was requested. No coding required!

  15. David Schwartz Interesting post.
    I just cleaned out some old magazines yesterday, dating back to 1997. In them I found some interesting AI articles very similar to the ones readable today. I do not have too much confidence that this time all the promises will be fulfilled - but we will see.
    I understood Eugene Mayevski to mean that he wanted to find some other UML tool; that is why I did not question UML in general. As a matter of fact, I would not know what might be better for modelling the classes of an object-oriented (OO) system than UML class diagrams.
    But you are definitely right - there are always more domain-specific modelling languages, and if suitable they should be used. But for Delphi, being an OO programming language, what could be better than UML?

  16. David Schwartz "Last I checked, you can use functional programming with objects"

    OCaml and F# allow mixing of paradigms, but some of the most popular functional programming languages, such as Haskell, are "pure" functional languages and don't offer object orientation. The true believers in each paradigm's camp are always arguing why their paradigm is superior and the other is flawed.

  17. David Schwartz "Thank goodness AI is going to come along soon, solving this problem for us. "

    I'm not so sure about the "soon" part. BTW, have you ever used genetic programming, neural networks or deep learning?

    I remember the first time I coded a neural network, probably in 1994, working from a textbook I'd purchased (it helped that the pseudocode was practically Pascal). I had lots of down time as an attendant at one of my school's computer labs. I used Turbo Pascal for coding and Lotus 1-2-3 to graph results. I fed the network patterns that represented on-off elements in a grid that made up the letters "C" or "E".

    I thought about how I'd write code to tell the difference. I then gave the trained network examples with a block or two falsely turned on or off; it still classified correctly even though the simple algorithms I'd come up with would have failed. I'd trained the network with examples facing left, right, and up only. I then fed it down-facing examples and it had generalized enough to still get them right! It was an amazing feeling I can still recall, realizing that the neural network had generalized its own robust classification solution without being explicitly programmed to do so. That feeling was only matched many years later when a machine learning algorithm spat out a few rules about when the results of the horse-handicapping algorithm I'd developed should be trusted or not. Those rules avoided a lot of losing bets and turned a break-even system into a winning one. :-)
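
    (For what it's worth, the core of that experiment still fits on a page. Below is a hypothetical sketch in modern Object Pascal - a simplified single-layer perceptron with made-up 5x5 patterns, not my original multi-layer Turbo Pascal code - that learns to tell a "C" grid from an "E" grid and still copes with a flipped block.)

      program CvsE;
      {$APPTYPE CONSOLE}
      { Hypothetical sketch, not the original 1994 code: a single-layer perceptron
        that learns to tell a 5x5 "C" grid from an "E" grid and still classifies a
        pattern with one block flipped. }

      const
        N = 25; // 5x5 grid, flattened row by row

      type
        TGrid = array[0..N - 1] of Integer; // 1 = block on, 0 = block off

      var
        W: array[0..N - 1] of Double; // weights; globals start out as zero
        Bias: Double;

      { Turn a five-row picture ('#' = on) into a flat 0/1 pattern. }
      function Pic(const Rows: array of string): TGrid;
      var
        r, c: Integer;
      begin
        for r := 0 to 4 do
          for c := 1 to 5 do
            if Rows[r][c] = '#' then
              Result[r * 5 + c - 1] := 1
            else
              Result[r * 5 + c - 1] := 0;
      end;

      { +1 means "E", -1 means "C". }
      function Predict(const G: TGrid): Integer;
      var
        i: Integer;
        Sum: Double;
      begin
        Sum := Bias;
        for i := 0 to N - 1 do
          Sum := Sum + W[i] * G[i];
        if Sum >= 0 then Result := 1 else Result := -1;
      end;

      { Classic perceptron rule: nudge the weights only when the guess is wrong. }
      procedure Train(const G: TGrid; Target: Integer);
      var
        i, Err: Integer;
      begin
        Err := Target - Predict(G);
        for i := 0 to N - 1 do
          W[i] := W[i] + 0.1 * Err * G[i];
        Bias := Bias + 0.1 * Err;
      end;

      var
        LetterC, LetterE, Noisy: TGrid;
        Epoch: Integer;
      begin
        LetterC := Pic(['.####', '#....', '#....', '#....', '.####']);
        LetterE := Pic(['#####', '#....', '####.', '#....', '#####']);

        for Epoch := 1 to 20 do
        begin
          Train(LetterC, -1);
          Train(LetterE, +1);
        end;

        // Flip one block that was off in both training letters; it should still read as "C".
        Noisy := LetterC;
        Noisy[7] := 1;
        if Predict(Noisy) = -1 then
          Writeln('Noisy grid still classified as C')
        else
          Writeln('Noisy grid classified as E');
      end.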

  18. Einstein once said, "We cannot solve our problems with the same level of thinking that created them." So why is everybody trying to improve the programming process with better programming LANGUAGES and LANGUAGE TOOLS? Especially when every other field that has benefited from software has obtained these benefits through VISUAL DESIGN TOOLS, not improvements in syntax or verbal expressiveness?

    UML is not this, nor are two-way tools the solution, IMHO. They're nice for maintenance work, as long as you don't monkey with the code; but once you start touching the code, they pretty much self-destruct. When people complain about the quality of ASM code being generated from common coding expressions, the thought of not being able to tweak inefficient generated code seems unacceptable. Except it's still way faster than interpreted code, which is outpacing the use of compiled code in production environments.

    For 20-odd years, we had two or three major platforms that presented fairly homogeneous development environments that took program text and compiled it down to native executables: IBM's mainframes, Microsoft's dominance with Windows and C#/.NET/MSSQL, and the broad Sun/DEC/HP Unix platforms.

    Today, things are fragmented all over the place, and there's no sign of anything moving back towards cohesion. If anything, the fragmentation is increasing because the cost of acquisition and adaptation is virtually nil.

    The fastest-growing language / platform today is the newest: Swift. A language that didn't even exist 3 years ago. Apple open-sourced it and everybody is tripping over themselves to capitalize on it in Apple's world.

    Except in the Android world, it's Java and Go.

    And let's not forget about php, python, ruby, perl, and all of the derivatives they've led to.

    For most of the past 40 years, nobody would seriously consider using an interpreted language in a production system; now servers are running a half-dozen interpreted languages with more being spawned regularly.

    Many of these languages have lost their OOPness by supporting just a handful of generic types and dynamic collections of whatever you want to attach a symbol to. Just try documenting THAT! HAH!

    Heck, PHP added classes in version 4, but even today most PHP devs refuse to build classes, claiming they kill performance. Nobody cares about maintenance any more. I guess they assume the next "coder" will just rewrite everything in another language.

    (Why do companies who hire people to do maintenance work ask for coding examples and demand to see programming proficiency, when these new-hires will spend 90% of their time reading and reverse-engineering existing code, and then writing a few lines of code to fix the problems they're given? I've had a couple dozen maintenance jobs since 2000, and not once was I asked to study some code and explain what it did. Yet that's what I spent the majority of my time doing once I was hired.)

    These days, the stuff on the back-end responds to REST end-point requests, regardless of what it's written in, while the front-end designs have no interest in how the back-end data is being managed, or even if it's highly distributed or all on a single server.

    UI elements originally intended to support HTTP page refreshes have become micro-managed through javascript libraries that completely bypass normal HTTP protocols and do most of their work through side-channels. We're back to "programming through side-effects" again.

    Delphi itself is interesting in that it works best if you're using it to build all of the parts of the system.

  19. Oddly, there are more interesting products that "look and feel" like Delphi, yet generate php back-ends or javascript that runs within a web browser and leverage existing js libs that are popular today. It's strange to me that this technology isn't sold or controlled by the people who own Delphi.

    Delphi's answer to this, HTML5 Builder, hasn't been updated in 3+ years, even though php and HTML+js libs have been rapidly evolving over the same time period.

    Meanwhile, companies claim to be searching for "rock stars" and "All-star coders" because ... there's not a single documentation paradigm that encompasses any of this complexity today. Nor any visual design tools or aids or anything. If you're not up on the latest toolstack a company is using, you won't be hired to work with them. Your familiarity with quirks of specific libraries is more valuable than your de facto coding skills.

    We here in the Delphi world love to complain about subtleties in syntax and how the IDE operates, while most of the "leading edge" activities in the world of software development have totally abandoned most of this stuff. Their devs have been hacking their way through the "mobile-first" wilderness laying down paths defined by new interpreted languages, tools and paradigms along the way. They fix technical issues by patching the language interpreter or adding on new bits to their run-time libs, and publishing it all to the world for everybody to share and build upon.

    I'm really having a hard time wrapping my head around any real future Delphi might have given that the rest of the programming world has abandoned the business model that supports Delphi's language, its core libraries, and its visual design paradigms.

    Work in the Delphi world is extremely hard to come by these days, and the vast majority of it is maintaining 5-10 year old code. A huge percentage of this code isn't even using Unicode yet. If there's any new project development going on using Delphi, nobody is posting job requests for it. Even C# devs are a dime a dozen, but at least there's more work to be found.

    In the world of programming platforms, Delphi might be the most efficient thing available. But in an industry driven by open-source self-sufficiency with zero cost of acquisition, Delphi isn't on anybody's radar. Nor do the people who've been maintaining it seem to recognize that this market even exists and is slowly forcing Delphi into oblivion.

    NOTE: this missive ended up taking a direction I had not expected. I guess it's a rant of sorts, but it's really a simple reflection of how a lot of us are clinging to old-style programming paradigms embodied in UML diagrams and a platform 100% controlled by a single development organization, vs. how the industry seems to be evolving. Even Microsoft has seen the light and cast their tools into the open-source environment.

  20. David Schwartz I cannot really follow your arguments all the way. What you write about UML might be correct considering Model Driven Architecture, but considering just UML I do not understand the problem with two-way tools. In the end, my UML modelling is usually done for the interfaces and not the implementation details.
    I am very confident that there is a bright future for Angular2 frontends working together with my Delphi REST backends - so sorry - despite yours being the longest message I have ever read here, I do not get your point.

  21. All software is simply a simulation of something. My point revolves around our dependency on writing lines of text as a means of describing (explaining) what we want a particular simulation to LOOK like when it's running. Most of what we're simulating today is visual. So we write lots and lots of text to describe how something should LOOK.

    Take five different fashion models (men, women, kids, whatever), and set about describing their looks in words in such a way that allows readers to reconstruct what each one looks like in their minds. This is how we write software today. This is essentially what "coders" do, after "designers" explain what things SHOULD look and behave like. And "maintenance" people track down places where the description (in code) ends up yielding something incorrectly, either visually or behaviorally.

    As someone who's been programming since 1972, I find it insane that we're standing here in 2016 using the exact same process, called "programming", that I learned back then!

    Most other fields have used software to create visual design tools that let them describe VISUALLY what their VISUAL models are intended to look like. The Delphi IDE gets us halfway there, but there's still a huge dependency on text to handle most kinds of interfaces and behavioral descriptions.

    UML is a diagramming language that helps organize our textual descriptions. It's like a schematic diagram for the inside workings of our programs. It exists as a symptom of the fact that we have no way to build software visually yet; we still have to write textual descriptions that either transform/translate data, move it from point A to point B, and/or recombine it in various ways (often in time-dependent ways).

    REST is an interface that's used to get data from point A to B, what I like to refer to as "plumbing". If you need to modify it because the front-end needs some additional data, you change its "shape". Both the front- and back-end need to have the exact same "shape" or the connection will "leak", just as if you had two pieces of pipe that are supposed to fit together but have slightly different shapes.

    In your example, the Delphi back-end is one technology; the front-end is built in a totally unrelated technology (Angular2) and platform; and yet both sides need to ensure they specify the exact same "shape" of the pipes that connect the two (there's a small sketch of what I mean at the end of this comment). I'd bet that your UML tool works for one side but not the other.

    As it happens, a relatively large portion of programming concerns the plumbing and connecting the pieces at each end.

    This effort should be invisible, IMHO.

    Data-aware controls in Delphi have made the plumbing that makes them work completely invisible.

    The plumbing that realizes the visual components that are laid out visually in the IDE is invisible.

    The plumbing that makes COM objects work is pretty much invisible.

    Look at where code is being written today ... a huge amount of it is dedicated to making plumbing work between two things that don't know about each other, and there's no standard interface specified -- or specifiable. (Consider how many different attempts have been made over the years, yet they seem to come and go like so many fashion fads.)

    Look at where the bugs are in code ... a very large portion of them, in my experience, are in the plumbing that's moving data from point A to point B, often translating / transforming it in the process.

    The thing is, I have a very clear sense of what I'm seeing in my head, but I'm having a very difficult time expressing it.

    This is actually quite apropos, because it is the exact same issue that leads to bugs when we try to get things out of our heads into program text to make software work. Sometimes it just doesn't get expressed accurately or completely or directly, and the result is somewhat of a mess.
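
    To make the "shape" idea concrete, here's a tiny hypothetical sketch of the Delphi side of such a pipe (made-up names, purely an illustration, not anyone's actual code). The Angular2 side has to declare the mirror image of it, and nothing checks that the two ever stay in sync:

      program ShapeSketch;
      {$APPTYPE CONSOLE}
      { Hypothetical sketch of the "shape" of one pipe, nothing more.
        The Angular2 side would have to mirror it, e.g. with
          interface CustomerDto { id: number; name: string; balance: number; }
        and any disagreement is exactly the kind of leak I mean. }
      uses
        System.JSON;

      type
        TCustomerDTO = record
          Id: Int64;        // if the other end quietly assumes 32 bits, the pipe "leaks"
          Name: string;
          Balance: Double;
        end;

      function ToJson(const C: TCustomerDTO): string;
      var
        Obj: TJSONObject;
      begin
        Obj := TJSONObject.Create;
        try
          Obj.AddPair('id', TJSONNumber.Create(C.Id));
          Obj.AddPair('name', C.Name);
          Obj.AddPair('balance', TJSONNumber.Create(C.Balance));
          Result := Obj.ToJSON;
        finally
          Obj.Free;
        end;
      end;

      var
        Cust: TCustomerDTO;
      begin
        Cust.Id := 42;
        Cust.Name := 'Example';
        Cust.Balance := 19.95;
        Writeln(ToJson(Cust));  // something like {"id":42,"name":"Example","balance":19.95}
      end.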


  22. I wish there was a way to express what I see in my head in some way other than a bunch of text.

    Which pretty much sums up my whole point about programming today. We're STILL forced to describe stuff in words because we have not bothered to design a better, more visual design paradigm for what we as software developers actually do.

    I don't know how the folks at Pixar can have created software that allows them to build 90-minute full-length animated movies using a visual IDE where some of the "objects" they create interact visually in ways that accurately reflect the laws of physics (to our eyes), and yet we -- the very wizards of software -- can't figure out how to make the plumbing between a Delphi back-end and Angular2 front-end (as a simple example) completely invisible and adaptable to changes made on either side.

    Yes, I realize I'm over-simplifying this analogy. But ... Pixar DOES use highly visual software tools to produce amazingly photo-realistic movies that conform to tons of independent physical models simultaneously, with teams of people using tools that are light-years ahead of what we ourselves use to produce our own apps, which pale in comparison to the complexities Pixar's tools are able to model. I'd assert that most of their "plumbing" is totally invisible. So Woody always looks like Woody, and Buzz always looks like Buzz, regardless of the context. Gamers today are even able to make apps that do in real time the kind of thing Pixar does mostly off-line. Yet in our own micromanaged data domains, we end up spinning our wheels and wasting cumulative hours of time when there's a mismatch between, say, a single integer data item that's represented as a 32-bit int on one side of the pipe and a 64-bit int on the other. It's like reading a lengthy textual description trying to figure out why one model's lip gloss isn't as "shiny" as expected in certain images.

    We've got lots of battles going on in our field, and even in the Delphi world. But I think we're fighting the wrong war. That's mostly what I was alluding to in my previous post.

  23. David Schwartz Cool. I understand it much better now and think you are right. The - let's call it, with Varela, "structural coupling" - in our programming is often too weak. I think you are right on that one - I even found an interesting thesis on this topic: http://www.framsticks.com/files/common/MSc_Hoffmann_StructuralCoupling.pdf.

  24. Roland Kossow that's an interesting paper. But I don't think we need AI to address this stuff. Eventually, it probably will, tho.

    The software field has produced lots of great specification languages -- DTDs and DSDs in XML come to mind -- but nobody seems to want to stick with any of them for very long. Now JSON is popular, and there are even efforts afoot to create DTD-like specs for JSON packets. But people see them as "inefficient", and the platforms don't enforce their use or even make them easy to maintain.

    The world of building materials that's available to building architects, for example, has 100+ years of history to draw upon. There are standardized shapes and sizes of pipes, beams, planks, panels, windows, risers, vents, ... the catalogs of available items are extensive and continue to grow. It's possible to build software that allows architects to fit these things together in well-defined ways, with simple rules that can be checked to ensure that two things can actually interface properly. Ditto for aircraft, chips, whatever.

    I mean, you can't tell me an aircraft CAD system isn't able to deal with an avionics system in an aircraft cockpit because it was designed by a third-party!

    But OUR software cannot even begin to deal with a GUI written in javascript connecting to a back-end REST-based server written in Delphi or Java.

    I'm like ... WTF is the problem with people in our field? We can debate about the most arcane aspects of whether there's one nop or two needed to carry out a pipelined floating-point operation within a loop, but we can't come up with a way for two pieces of code on separate platforms using the same JSON/REST interface spec to work together? (BTW, the FPU is literally a separate computing platform from the main CPU! And especially now that we're using GPUs as co-resident array processors, there are even libraries that let us manage those resources intelligently, even though there's not even an IDE or dev environment of any kind running on them.)

    While I can't say for sure, I'm willing to bet that Pixar's software works by allowing people to build catalogs of shapes, behaviors, and interface definitions that their design environment allows to be combined together in well-defined ways, without having to know anything of the existence of how any of these things are actually implemented (ie., rendered).

    Hmmm ... sounds like components to me...

    But why can we do that with the things we're working with ONLY when they exist WITHIN, or are directly accessible to, the development environment we're using?

    Why do we have to keep writing the same kinds of stupid code over and over and over again, simply because there's no way to characterize things that exist outside of our immediate IDE?

    It's not just that. We can't even prevent termites in our own code!

    I worked on a (Delphi) system a while back that had around 450 calls to a back-end through a crusty old API. The original developer employed copy-and-paste techniques, and you could see where he had discovered something new because the code templates changed subtly. The odd part was ... there were 450 separate places where these calls were made, each of which set up parameters, passed them through the API, got back a result, parsed the data, tore down the temporaries used for the call, and then passed on a flag or result string of some kind.

    Technically speaking, there were exactly two API calls. From a business-logic standpoint, there were around 45-50 truly distinct calls being made. The others were all duplicates that were never factored out.
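
    Just to show what was missing, here's a sketch of the kind of wrapper that would have collapsed those 450 call sites into ~50 one-liners (all names are hypothetical, nothing from the actual system):

      unit BackendPlumbing;
      { Hypothetical sketch of the single wrapper that was never factored out:
        set up, call, parse and tear down in one place, so a fix lands
        everywhere at once instead of in 450 copy-pasted variations. }

      interface

      uses
        System.SysUtils, System.Classes;

      type
        // Stand-in for the crusty old API; in reality there were exactly two entry points.
        TLegacyApiCall = function(const RawRequest: string): string;

      function CallBackend(ApiCall: TLegacyApiCall; const Operation: string;
        const Params: TStrings): string;

      implementation

      function CallBackend(ApiCall: TLegacyApiCall; const Operation: string;
        const Params: TStrings): string;
      var
        Request: TStringList;
        RawReply: string;
      begin
        Request := TStringList.Create;          // set up the temporaries once, here
        try
          Request.Values['op'] := Operation;
          if Params <> nil then
            Request.AddStrings(Params);
          RawReply := ApiCall(Request.CommaText);
          // parse/validate in one place; raise instead of passing back magic flags
          if RawReply.StartsWith('ERR') then
            raise Exception.CreateFmt('Backend call "%s" failed: %s', [Operation, RawReply]);
          Result := RawReply;
        finally
          Request.Free;                         // ...and tear them down in one place, too
        end;
      end;

      end.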

  25. While fixing some other issues, I discovered a pattern that led to identifying six specific coding errors. Subsequent searches of the code (all 450 instances of these calls) uncovered over 100 instances of these errors that nobody wanted to hear about. They were very subtle data-sensitive bugs that resulted in intermittent errors, and there was no way to build tests to detect any of them. With proper refactoring, they would have all disappeared, and the whole mess could have been reduced to about a dozen components with standard interfaces.

    But this company was allergic to refactoring, and actually fired me rather than allow me to fix these issues. (This buggy code was being presented to a US Government agency as "bug-free" based on a definition of "bugs" as being exclusively those characterized and submitted by the user organization. What we developers found ... just didn't count.)

    The problem in this case was that 100% of these errors were "plumbing" issues that had no reason to exist in the first place.

    When a lot of programmers are in a hurry, copy-and-paste always seems to take precedence over spending a few minutes to define a clean interface and factor out common functions. With no explicit mechanism for defining "plumbing" code, there's no incentive to deal with it.

    I'd assert that it's errors like these in "plumbing code" that account for a majority of technical debt that accrues in most software systems.

    (Today, "rockstar" programmers are folks who can crank out shitpiles of stinky code like this in their sleep. Nobody wants to discuss technical debt.)

    I guess the question is, who benefits by building a software dev platform like what Pixar has, but that's used for building software from a huge catalog of parts, many of which are non-proprietary and open-source? (And don't tell me this is what Delphi's GetIt service is for. It's a LONG WAY from what I'm talking about!)

    The US Dept of Labor has estimated a general shortage of a million programmers by 2020, based on current programming technologies -- which haven't changed in 60 years. I'm sure AI will solve that problem, and perhaps we'll skip over the whole visual programming thing altogether.

    Meanwhile, Zapier seems to be solving the general "connect anything to anything" problem at the web API level (mostly JSON/REST). And even THEY have figured out how to monetize it!

    (Would someone like to hire me to solve this? It's a fascinating problem and one that I believe can be solved without too much difficulty. All it requires is a fresh approach and the willingness to stick with it.)

