Tuesday, December 17, 2013

Fedora 20: recover stored passwords, in case you lost them

After upgrading to Fedora 20 (I upgraded to the RC before the final release), passwords I had stored in GNOME Keyring (the Passwords and Keys application, also known as Seahorse) were gone. This can also affect some applications; in my case it was Empathy.
The problem is that with this release GNOME Keyring stores passwords in a different place. It used to be ~/.gnome2/keyrings; now it is ~/.local/share/keyrings.

The solution is quite simple:

  • open the Passwords and Keys application
  • lock the login keyring
  • copy the keyring files from the old location to the new one
  • log in again

Saturday, November 30, 2013

How to make a good API

A good API is a large part of success. Sometimes it's nearly the only reason a library or solution gets chosen. It's not easy to create a good API, and there is no single recipe for it, since different cases have different requirements. Everyone who has used many different libraries across several different programming languages should already have a feeling for what a good API is. I'll try to summarize the main points here.

  1. Flexible but convenient
    Flexibility is something everyone understands; convenience, unfortunately, is quite often forgotten. Some use cases are frequent, others are not. One key to success is to have a dedicated API for the most common use cases alongside the general, more flexible API.
    For example, consider a very simple Person class:
    • You need a no-argument constructor to create an empty object that will be filled later
    • You need methods to get/set the first and last name
    • You need methods to get/set a list of middle names (because some people have more than one)
    The list above makes a flexible API. Now let's get to the convenience part:
    • A constructor that takes first and last names as arguments, because that's what most people have
    • A method to get/set a single middle name, because very few people have more than one
    In some cultures most people have a middle name, so if the application is specific to such a culture, replace the two-argument constructor with a three-argument one.
    The point here is not to limit the API to the basic all-cases set, but to add additional APIs as shortcuts for common use cases.
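A minimal Java sketch of such a Person class; the names and behavior are illustrative, not taken from any real library:

```java
import java.util.ArrayList;
import java.util.List;

// Flexible core API plus convenience shortcuts for the common case.
public class Person {
    private String firstName = "";
    private String lastName = "";
    private final List<String> middleNames = new ArrayList<>();

    // Flexible: an empty object to be filled later.
    public Person() {
    }

    // Convenient: covers the most common case directly.
    public Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    // Flexible: the full list of middle names.
    public List<String> getMiddleNames() { return middleNames; }

    // Convenient: most people have at most one middle name.
    public String getMiddleName() {
        return middleNames.isEmpty() ? "" : middleNames.get(0);
    }

    public void setMiddleName(String middleName) {
        middleNames.clear();
        middleNames.add(middleName);
    }
}
```

The convenience members are pure shortcuts over the flexible ones, so nothing is lost by adding them.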
  2. Flexible but not bloated
    This is part of the first point, but it comes up so often that it deserves to be separate.
    Most APIs are designed in a "what-if" way; however, you have to stop in time, because there are no limits to "what-if". There's no clear recipe here, since it all depends on the exact domain, but there are a few guidelines:
    • Something that has very high impact and/or hasn't changed for a long time is unlikely to change without early warning
      • E.g. IPv6: we have been moving in this direction for so long because too much software was "hardcoded" for IPv4. Was that software wrong? No, it saved a lot by doing so. Right now you can choose to either support these two, or make your system flexible enough to support a growing list of different formats. Is the latter worth the additional time? It's up to you to decide.
    • Convenience classes and methods are only convenient as long as it's clear what they do and what the difference between any two of them is; here are a couple of bad examples:
    • String is a simple and very flexible data type
  3. Names should be meaningful, guessable, structured, consistent and short
    Naming is one of the most important things in an API. While meaningfulness is the most emphasized quality, it's far from the only one. Although most of the time a programmer reads code, he also writes it, so being able to guess a name improves productivity quite a lot, especially when accompanied by code completion.
    Structuring is very important when an API is large. A good example of how not to structure your API is the Windows API, while GTK+ is an example of good structuring. In short: your API should have something that separates it from the rest of the world (a namespace, a package, ... or a simple prefix), and a large API should be divided into submodules etc.
    Another thing that makes names guessable is consistency. Naming conventions should be consistent across the API, preferably consistent with other APIs in the same field, domain, language etc.
    Finally, names should not be longer than needed. While longer usually means clearer, there's always a point beyond which length no longer adds clarity. E.g. MAX_INT is a perfectly fine name, and making it MAXIMUM_INTEGER_VALUE adds no additional value.
  4. Performance-oriented, but convenient
    Some APIs are not meant to be called often; others have to think about performance. However, convenience should be maintained. The Windows API is an example of an API that sacrificed convenience for performance. What it lacks is functions to fill various structures with sensible default values; it's really annoying to set every struct member. At the same time it is bloated in the number of functions: there are often several functions instead of one. E.g. FindFirstFile() and FindNextFile() could easily be just one function, and who really needs ZeroMemory() when you can use the much more powerful memset()?
  5. Convenient defaults
    The less the user has to specify explicitly, the better. Most of the time...
    Default values should be intuitive and meaningful. Otherwise it's better to require the value to be specified explicitly.
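One way to sketch this in Java is with overloads that supply the defaults while the fully explicit variant stays available; the Connector class and its values below are hypothetical:

```java
// Hypothetical connection API: overloads supply intuitive defaults,
// the full version remains for flexibility.
public class Connector {
    public static final int DEFAULT_PORT = 8080;
    public static final int DEFAULT_TIMEOUT_MS = 30_000;

    public static String connect(String host) {
        return connect(host, DEFAULT_PORT, DEFAULT_TIMEOUT_MS);
    }

    public static String connect(String host, int port) {
        return connect(host, port, DEFAULT_TIMEOUT_MS);
    }

    public static String connect(String host, int port, int timeoutMs) {
        // Real code would open a socket; here we just describe the call.
        return host + ":" + port + " (timeout " + timeoutMs + " ms)";
    }
}
```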
  6. Carefully chosen data formats
    In short: don't just blindly use XML.
    If you're making a web service, think about which data format is most appropriate. It might be XML, JSON or anything else. Remember that someone will have to use it, and it's for their sake.
    The same is important for configuration files. Forcing people to write XML by hand is... well, don't use XML when something simpler works just fine.
  7. Configurable, but not over-configurable
    This one applies to frameworks. Many of them allow the user to change behavior via some setting in configuration. This is good, but there are limits. First of all, it should be clear which configuration is valid and which is not. The more settings there are, the harder it is to list all the allowed combinations, not to mention that they all should work! "Everything is pluggable, extensible and configurable" is never the answer, as it will result in a huge and buggy mess. No setting is better than a non-working one.
    For complicated cases a white box can be used: instead of a super-configurable black-box component, have a component composed of smaller components. When the limits are reached, the user can compose his own component, reusing parts of the original one.
  8. Synchronous vs. asynchronous
    An asynchronous API is good for operations that might take long to complete. Ideally, all long-running operations should be asynchronous.
    At the same time, every asynchronous API is only good if it provides synchronous alternatives. Strange? Asynchronous is good for UI, as it keeps it responsive rather than hanging it. But when you're already on a non-UI thread, you want to perform sequential actions synchronously rather than messing with asynchronous continuations. Since you can't predict where which API will be used, it's better to provide both and let the user decide, rather than pushing one or the other down his throat.
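A small Java sketch of offering both forms, with a hypothetical Downloader: the synchronous call is the primitive, and the asynchronous one is a thin wrapper around it, so callers on either kind of thread get the convenient variant:

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical download API offering both forms.
public class Downloader {
    // Synchronous: convenient on a worker thread.
    public static String fetch(String url) {
        // Stand-in for real I/O.
        return "contents of " + url;
    }

    // Asynchronous: convenient on a UI thread.
    public static CompletableFuture<String> fetchAsync(String url) {
        return CompletableFuture.supplyAsync(() -> fetch(url));
    }
}
```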
  9. There are non-English-speaking people out there
    If you have a UI, it should be localizable. If you throw exceptions or otherwise report errors, the error messages should be localizable too: either provide error codes along with the messages, or localize the messages themselves.
    Finally, if you have no idea about localization, don't implement it without consulting someone who does. A good start is the GNU gettext manual, especially the section about plural forms, to get some grasp of what you're dealing with.
  10. Backward/forward compatibility
    Ideally every new version should be backward compatible with all previous ones, but in practice that's almost impossible. The guidelines here are:
    • Design the API from the beginning to be extended in the future. Be especially careful with boolean arguments (an enum is a good replacement). In C, using opaque structures is a good way to provide extensibility; avoid reserved members/arguments, because you can end up with something like this
    • If future features are known or very apparent, prepare the API for them in advance (forward compatibility)
    • Avoid breaking backward compatibility, but don't add workarounds to the API itself, because later you'll have to be backward compatible with those too
    • Prefer big breaks to frequent ones: no one likes API breaks, but frequent breaks seem to be hated more; when you do break the API, use the chance to fix all known shortcomings
    • Be clear about your API's stability, so that users know when to expect breakage; major releases are good candidates for breaks, while minor releases should be backward compatible
    • It is also good to have alpha/beta testing, where new APIs are introduced for users to try but are not yet stable and might change in the final release; user feedback is the best way to determine shortcomings
    • Be realistic: if your project survives long enough, you will eventually have to break its API
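To illustrate the point about boolean arguments, a small hypothetical Java example: an enum parameter stays readable at the call site and leaves room for new cases without breaking the API:

```java
// Boolean arguments lock an API into two cases; an enum can grow.
public class Exporter {
    public enum Format { PLAIN_TEXT, HTML }  // later: MARKDOWN, PDF, ...

    // Fragile alternative: export(notes, true) tells the reader nothing,
    // and a third format would force an API break:
    // public static String export(String notes, boolean html) { ... }

    // Extensible and self-documenting.
    public static String export(String notes, Format format) {
        switch (format) {
            case HTML:
                return "<p>" + notes + "</p>";
            default:
                return notes;
        }
    }
}
```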
  11. Convention over configuration is dangerous
    The point is to save developer time. In practice, not in theory! Just because a developer writes less code/configuration does not mean he actually spends less time on it.
    When things work auto-magically, they sometimes stop working in the same magical way, and finding out why can be very time-consuming.
    Besides, changing a convention is very painful, as it's less apparent (everything compiles as it did, but it doesn't work as it did; happy debugging).
Bottom line

Great APIs are not designed behind closed doors by a few system architects. Constructive communication and the involvement of many parties from the beginning is the way to understand the problem.

Sunday, October 20, 2013

Slam them hard

Someone said your language sucks? Slam them back!
  • C/C++/Java
    The switch control structure is terrible because of one small issue: fall-through by default. Making break the default behavior and using continue to fall through would make it perfect. But it was not to be...
  • C
    1. Where are the simple data structures like list?
    2. Working with strings?
  • C++
    1. Changed widely used header by accident?
    2. Oops, forgot the copy constructor again?
    3. Great new compiler, now template error messages are only two pages long!
  • C#
    1. Works on every platform as long as it's Windows. Mono? Have you at least tried it?
    2. When invoking delegate, you have to check if it's not null! Why?!
  • Java
    1. Need to return two values from method again?
    2. When compiling Java became faster than C++, then Maven was invented...
    3. Java <=1.4: who needs templates, store plain objects in collections, Java >=1.5: generics!
    4. Java <=1.7: interfaces are absolutely abstract, anything else would be insane! Java 1.7: great news, you can now provide default implementation in the interface!
    5. Terrible slowness on a large scale and you can scale it even further!
  • Python, JavaScript, Perl, Ruby
    1. Mistyped variable again when assigning?
    2. Added extra argument to widely used function?
  • JavaScript
    1. Forgot to write var... in two places...
    2. Object is function and function is object, but you can't call this function, because it's not a function
    3. Fragility of C + the power of C++ + extra ammunition

Friday, October 4, 2013

9 reasons not to use XML

When I have to write code that deals with XML, at some point I wish it did not exist...

NOTE: I don't say "don't use XML", I say "think twice before doing so", because simple things should be simple (and XML is unlikely to keep them simple).

Reason for writing this: I've been working with Windows 8 tiles for over two weeks, and I need to let off some steam...

  1. XML is for storing data, not writing code
    Yes, this one is dedicated to Spring, Struts, Ant, Maven and co. You can call it "declarative", "configuration" or anything else; I say it's code. And XML is terrible for writing code: difficult to read, a lot of extra characters to write... Maybe it's easier to parse, but since when do we design languages for compiler/interpreter programmers?
    BTW, you can step through the code using a debugger...
  2. It's very difficult to manipulate without a dedicated library
    Libraries exist, more than enough of them. But quite often the modifications you have to make are so trivial that simple text find&replace seems to be the best solution. But you can't do that, because you have to make sure the string you put into XML does not contain the characters <, >, &, ", '... Simply too many of them, and who can name them all without looking somewhere? So, no tools like sed or string methods like replace() for you; use a fully featured library to change those two attributes...
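For illustration, here are the predefined entities a correct replacement has to handle, sketched in Java; note that & must be replaced first, or the other substitutions get double-escaped:

```java
// The five predefined XML entities. Order matters: escape & first,
// otherwise the &lt; produced for < would itself be mangled.
public class XmlEscape {
    public static String escape(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;")
                .replace("'", "&apos;");
    }
}
```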
  3. It's hard to work with even using a dedicated library
    Marshaling objects to XML or unmarshaling them back is fairly easy; full-featured frameworks are available for this. But it gets more interesting when you only want to change a couple of nodes in an XML document that wasn't made by you. Doing full marshaling is not reasonable: a lot of code and bad performance. Adding/removing a node or setting an attribute is not as trivial as you'd wish. Simple string find&replace is out of the question, while searching for nodes is either fragile or requires quite a lot of checking. Debugging such code is never fun.
    The only simple way I've seen is manipulating the DOM in JavaScript, but you're not always lucky enough to use such a dynamic language. With C++, C# or Java this is much messier...
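As an illustration of how much ceremony a "change a couple of nodes" task takes in Java, here is a sketch using the standard DOM and transformer APIs; the version attribute is a made-up example:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

// Setting one attribute on the root node takes a parser factory,
// a transformer factory and a fair amount of ceremony.
public class SetAttribute {
    public static String setVersion(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            Element root = doc.getDocumentElement();
            root.setAttribute("version", "2");

            // Serialize the modified document back to a string.
            StringWriter out = new StringWriter();
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
            t.transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Compare that with a one-line `root.setAttribute(...)` in a browser's JavaScript console.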
  4. Slow to read, slow to write
    Complex syntax makes XML parsers slow compared to the ones for other formats. Formats like JSON or YAML are faster in this respect, as well as more human-readable. Not to mention simple key-file or INI formats, which are sufficient way more often than they are used. And they are easier to parse by hand than XML is to read using a library...
  5. It's only suited for tree-like data
    If data has a simpler structure than a tree, say a list of records, XML is too complicated for it; the record-jar format is better suited. The same applies to data that is more complicated than a tree, like a graph.
  6. Too powerful for most use cases
    What is the difference between a child node and an attribute, and when do you use one or the other? Do you need entity references? Do you need schema validation? Do you need XSLT?
    When you look at what you need and answer "no" to almost every advanced feature, why mess with this format at all?
  7. Difficult to embed into program code
    Most C-based programming languages use double quotes for strings, which are also used for attributes in XML. Happy escaping!
  8. But everyone knows it
    That's a valid argument. But you could also say that everyone goes to the dentist, so it should be something everyone likes... The fact that XML is an industry standard does not make it good; it only means that the industry chose this format because they thought it was better than the alternatives at the time. It does not mean that it still is, and it does not mean it is good for the problem you are solving ATM.
  9. XML for UI: from nice to ugly
    UI is one place where XML has proven to work well. No surprise: UI is tree-like, so XML fits right in. Moreover, widgets have properties and children, so attributes and child nodes do the right thing. Application supports plugins? Several XMLs can be merged; Firefox is a good example. Web pages are XML (well, almost), etc.
    Now about the ugly part. Do you manipulate raw XML in the code? Tiles in Windows Store apps require XML for notifications and badges. The first is ugly: you get a predefined template and manipulate it using XML manipulation classes to insert the necessary attributes. With badges it's much, much worse. They only support numbers and predefined glyphs (identified by string constants!), but to set a badge you have to prepare a full XML document! F*** you very much.
And now the enterprise hell falls onto my head...

Friday, July 5, 2013

Gnote runs under Wayland!

Something I coded this evening/early night. Now Gnote can be run under Wayland!

Global keybindings do not work, and Gnote no longer moves windows between workspaces. You can set up global keybindings in GNOME Control Center, and there are command line options you can use.
To get the entirely old Gnote, pass the --with-x11-support option to configure.

Wednesday, June 19, 2013

The dangers of C++ smart pointers

I've spent an entire day debugging a single segfault in Gnote! Luckily it only occurred "sometimes" when closing the application. It was an issue with a C++ smart pointer, Glib::RefPtr to be exact. Here are some dangers you might encounter with all types of smart pointers:
  • Mixing smart and plain pointers is never a good idea; smart pointers only truly work when used everywhere
  • Make sure you know how the pointer WORKS; look at its source code if required. Assumptions are evil everywhere, but here they can bring the house down (as almost happened in Gnote)

The issue in Gnote
Glib::RefPtr is designed to point to descendants of Glib::Object, but is not limited to them. It requires that the object has methods reference() and unreference(), which do reference counting and, if necessary, delete the object. The problem is that the RefPtr constructor expects the object to have a count of 1 and does not call reference() (probably related to how GTK+ works). However, the destructor does call unreference(). So, if you create a RefPtr passing an object that's already referenced by other RefPtrs, the object's internal count is less than the actual number of RefPtrs pointing to it. As a result, when all the RefPtrs are destroyed, you might get a crash on one of the later ones, with some nice time to chase this bug down :)

Sunday, June 9, 2013

Programming best practices I disapprove of

  • Class imports instead of package imports
In languages where it's applicable, it's usually recommended or even required. It has a point in C++, where fewer #includes mean faster compilation, but that only matters for a large project that takes long to compile.
For languages like Java it makes little sense. Compilation time is the same; all you really get is the effort of maintaining your imports. While an IDE can help you here, it is still an extra click/key press over and over again, extra code changes in your VCS and clutter in code reviews. What for? Some bureaucratic claims of nicer code, nothing more.
  • Code-to-interface
This practice requires defining an interface and making code depend on it, not on the implementing class. The reasons are clear: easy to create a second implementation, easier to test (really?), more enforcement of proper use of the class.
The problems occur when overuse kicks in. In my view an interface is good when there is more than one implementation of it. In other cases it's just an extra piece of code that causes you problems and usually is there for no real reason. Besides, refactoring a class into an interface and an implementation is usually not hard, in fact rather trivial, so why write the interface in advance when you can add it just when the need arises?
  • Banning any language feature
In corporate coding standards you can find various rules like "use of goto is not allowed", "the ternary operator is not allowed" etc.
If a language has a feature that's not deprecated, why shouldn't someone use it? The use must be proper, of course! Even an ugly thing like goto can actually make code more readable and understandable; you just have to use it in the right place and in the right way.
  • Setters/getters, no public fields
That's the most Java-like way: public fields are generally banned in Java, and everyone just codes setters and getters without thinking. It's even enforced by frameworks etc. Each time I see a five-year-old class with nothing but private fields and public trivial setters and getters for each of them, I wonder: why? To write more code? To prepare for a change that's unlikely to happen in a lifetime? And if the change happens, are you really ready? You haven't declared throws on a setter, so how are you going to add potential validation in the future? Yes, you can add an unchecked exception, but is the existing code ready for that?
  • Single return
Some claim that multiple returns make code more complicated. I find it quite the opposite. When a function/method has to do some cleanup before exit, then a single return probably makes it simpler, but in other cases it makes it more complicated: extra ifs, nested blocks etc.
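A small Java illustration of the trade-off, with made-up business rules: guard clauses with early returns keep the method flat, while the single-return version would need nested ifs and a result variable:

```java
// Guard clauses: each special case exits immediately,
// and the common case is stated last, un-nested.
public class Discount {
    public static int discountPercent(Integer age, boolean member) {
        if (age == null) return 0;   // unknown age: no discount
        if (age < 18) return 50;     // youth discount
        if (age >= 65) return 30;    // senior discount
        return member ? 10 : 0;      // the common case
    }
}
```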
  • Separation of concerns as much as possible
The key point here is "as much as possible". Some people take this to an extreme, even beyond the line of sanity. It's fine to chop large and complex problems into smaller and simpler ones, but it should not create new problems. When you chop a tree into toothpicks, what you get is crap. Now you apply some design patterns to put some pieces together, then other patterns to tie those... Eventually you end up with a crap of patterns and start looking for more fancy ways to implement them...
Some things are interdependent; you can't actually separate them, and an attempt to do so artificially will only cause you more problems.
Some problems are just hard, and you can't simplify them by cutting them into pieces.

To end this on a positive note, here is what I believe to be universally best practice:

  • Everything you do, do it for a reason
  • If what you did doesn't meet objectives - undo, replace or fix it
  • Throwing junk away is usually the right thing to do, whatever the cost of that junk

Friday, May 17, 2013

Bug flood

Someone opened the flood-gate! In just one day, 11 new bugs were opened against Gnote. Something I've never seen before. That's roughly a 25% increase in Gnote's bug count. The good news is that almost all of them make sense and are quite trivial to fix. Well, enough writing, time for coding!

Sunday, March 10, 2013

Mercurial vs. Git (every day use)

A common comparison you can often find on the internet, with many claims on either side. I'm personally regularly exposed to both. I won't hide it: I prefer Git, and I'll try to explain why.
Edit: almost half a year after the original post, with more experience with Mercurial, I decided to add some edits that bring Mercurial closer to Git. I still prefer Git, but the margin between them has gotten significantly smaller since the original version. All edits are marked below.
Heads vs. rebasing
One of the first things you'll encounter: you commit some changes to your local repo, but the push fails, because someone else has pushed before you. In this case Mercurial and Git work differently. In Mercurial you pull the changes and end up with two (or more) heads. Heads are actually branches, they're just unnamed. What you have to do is merge them, commit and then push. You can get conflicts during the merge, which you have to resolve.
In Git you execute git pull --rebase. Git rolls back all your commits, pulls the changes and then recommits yours. Each of your commits can fail due to a conflict; you resolve it and then resume rebasing. When done, you can push.
At first glance Mercurial seems simpler. However, this simplicity has one significant drawback: heads are actually branches that end up in your repository (the central one, where everyone pushes). As a result your history has a lot of branches and merges, which can be difficult to track if you have many developers pushing to the same repository. In Git you only have the branches you explicitly created, and all commits to one branch form a nice linear history, which is much easier to understand.
Edit: there is a Rebase extension for Mercurial, which lets you work the same way as in Git. I highly recommend using it; nice history is worth it!
Staging area makes everyday work easier
Git exposes the staging area to the user, while Mercurial doesn't. Again, on paper Mercurial looks simpler: you only execute add for new files, and it's enough to execute commit for changes. With Git there's no difference whether you change or add a file; you have to execute add on it before you commit (there is a shortcut: git commit -a).
In real life I find Git simpler to use! Why? I like to check the changes before committing, so I execute diff on each file. If it's fine, I execute add on it. This way Git tracks the reviewed files for me. With Mercurial I have to keep that in mind. Also, sometimes I change a lot of files and then want to commit them in a few separate commits. With Mercurial I have to list all the files in a commit command, which can be complicated. With Git I simply add each desired file and then commit. In this case Git is perfectly usable from the command line, while with Mercurial I'm forced to turn to a GUI tool.
Edit: Mercurial seems to have an alternative to this, the Mercurial Queues (MQ) extension. I've only looked a bit at its documentation: at first glance, more options, but harder to use. I think I'll stick with shelve/TortoiseHg for now (and I still like Git's way).
Status is status
Git's status gives you more information than Mercurial's. With Mercurial you get a list of new/modified/removed files. Git gives you this too, but it also tells you the current branch and the number of commits in it that haven't been pushed yet. It will also tell you, if it is known, that your branch was rebased onto some older revision. With Mercurial I have to execute multiple commands to get the same information (and I often do, especially hg outgoing).
Freedom locally
In Git you can commit and undo commits; there's nothing restricting you from doing that except the repository you push to. You can make multiple commits locally and undo them if you haven't pushed them yet. Mercurial puts restrictions on this; thankfully, there are extensions that can help you.
Git often gives less-like interactive output
In particular, if you run git diff or git log, the output will often exceed the number of lines in your console. Git automatically pipes it through a less-style pager, so you can browse the output. Mercurial just dumps everything to the console, so if you run hg log without any arguments, the next thing you do is press Ctrl+C.
More power from git add
git add has a nice feature called patch mode. Not an entirely easy thing to use, but it can save you a lot of time on certain occasions. It allows you to commit only some of the changes made to a file, rather than all of them. I recommend everyone who uses Git learn to use it.
git commit --amend lets you change the message of your last commit. It saves a lot of time when you commit and suddenly realize you forgot to enter something like a bug number into the commit message. It also lets you add a file or a change you missed.
Edit: Mercurial has an Amend extension, but be careful with it: it not only lets you change the commit message, but also adds (merges) all pending changes into the last commit, so just changing the commit message is not so simple.
A good word about Mercurial
Certain parts of Git have to be learned, in particular git rebase, where you need to understand how it works and know which commands to execute when you get conflicts. hg outgoing nicely tells you all the changes you're going to push from all branches; hg incoming does the opposite. I'm certain Mercurial has some more nice things that Git lacks.
Edit: Mercurial lets you pull or switch branches with pending changes
When initially writing this I missed a rather obvious feature in Mercurial: it lets you pull or switch branches when you have pending changes. Git doesn't let you do that; it requires a clean working directory first (everything committed, no changes except untracked files).
Bottom line
Although the above might look like criticism of Mercurial, that wasn't the intent. Both are great version control systems, far better than Subversion. It's up to everyone to choose. I simply learned Git first, read a lot on the internet about Mercurial having fewer features but being simpler, and finally started using Mercurial at my job. And I think this "Mercurial is simpler" is nonsense: easier to learn, probably; easier to use every day, I really don't think so!

Monday, January 21, 2013

Fedora 18 impressions

For a few days now I've been using Fedora 18. I didn't upgrade; I went for a fresh install instead, keeping the old /home.

Positive impressions:

  • The new Anaconda installer. I was a bit confused at first by the manual disk partitioning, but once I figured it out, it all made sense. Otherwise it's a much simpler installer. It is also fully in Lithuanian (those hours I've spent translating it were not a waste)!
  • The new GNOME message tray is much better than the old one. The only issue with it is applications that show a permanent icon in the tray (like Rhythmbox); you have to get used to the Super+M hotkey.
  • GNOME 3.6 seems to be faster than the previous version
Negative sides:
  • GNOME Shell is quite unstable for now; I've had around one crash each day. Looking forward to updates (perhaps even the ones I installed a couple of hours ago)
  • Firefox also suffers some stability problems; I'm not sure if it's related to the same problems as GNOME Shell. But this gives me a better chance to give Epiphany a go; so far it seems faster than Firefox.
  • There's no ffmpeg plugin for GStreamer 1.0, only for 0.10, so I can't watch some movies using Totem. I've installed VLC as a temporary solution (I don't like it; the UI is very uncomfortable).

Wednesday, January 16, 2013

How I'd fix Java

If I could change anything in Java, here's what I'd do:
  1. Introduce a string primitive
    It would be just like the String class, except it would never be null; it would be an empty string instead. How much boilerplate code do null checks cause for String! Autoboxing would work too.
    For consistency, other primitives could also allow calling methods of their corresponding wrapper classes.
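The kind of boilerplate a never-null string would remove, shown with a hypothetical helper: today every String parameter that may be null needs a guard:

```java
// With a non-nullable string primitive, both guards would simply go away.
public class Names {
    public static String displayName(String first, String last) {
        String f = (first == null) ? "" : first;  // null check boilerplate
        String l = (last == null) ? "" : last;    // ...and again
        return (f + " " + l).trim();
    }
}
```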
  2. Make generics array-like
    By introducing generics, Java's creators worked hard to ensure backward compatibility. And, IMHO, failed. Consider:
    • An array of Integer objects can be cast to an array of Number objects, but collections cannot.
    • If you insert something illegal into such a downcast array, you get an appropriate exception at run time. That's not the case for collections.
    The intent was clear:
    • Collections were not designed to throw an exception due to an incompatible type, as they were designed to always hold Object references. Changing that could break existing code.
    • Any down/up-casting was not permitted on generic collections for the same reason.
    However, I think this wasn't the right thing to do, because:
    • If you're not sure what your legacy code inserts into a raw collection, pass in one that holds Object and you'll get no issues.
    • If the legacy code is supposed to insert only a certain type (say String), pass in the appropriate collection. In case the legacy code inserts something else, you get an exception at the exact place where the bug in the legacy code lies, rather than an ugly ClassCastException miles away in the new code.
    Finally, why not combine autoboxing with collections, so that primitives could be stored in them? I mean, collections would still hold references to wrappers; you just get autoboxing when putting/getting, and non-null-ness is ensured.
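The asymmetry can be demonstrated in a few lines: array covariance compiles and fails at run time exactly where the bad store happens, while the equivalent collection cast is rejected at compile time:

```java
// Arrays are covariant and checked at run time; generic collections are
// invariant, so the equivalent cast does not even compile.
public class Covariance {
    public static boolean storeDetected() {
        Number[] numbers = new Integer[1];   // legal: arrays are covariant
        try {
            numbers[0] = 3.14;               // boxes to Double, fails at run time
            return false;
        } catch (ArrayStoreException e) {
            return true;                     // caught exactly where the bug is
        }
    }
    // List<Number> list = new ArrayList<Integer>();  // compile error
}
```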
  3. Allow public fields by the JavaBeans specification
    Probably 90% of classes conforming to the JavaBeans specification have only trivial setters and getters. I see no real reason why public fields are not allowed and all this boilerplate code must be written.
  4. Enhanced for should be null-safe
    Why isn't it called for-each, like in other languages?
    This type of for statement was introduced for readability, but having to check for null takes away half of that, because you have to wrap it in an if (you can avoid the if by writing an appropriate condition in a traditional for).
    IMO the enhanced for should treat a null collection as empty.
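Until the language does this, a common workaround is a small helper that substitutes an empty list, illustrated here with a hypothetical orEmpty:

```java
import java.util.Collections;
import java.util.List;

// Treat a null collection as empty before handing it to the enhanced for.
public class Safe {
    public static <T> List<T> orEmpty(List<T> list) {
        return (list == null) ? Collections.<T>emptyList() : list;
    }

    public static int count(List<String> names) {
        int n = 0;
        for (String name : orEmpty(names)) {  // never throws NullPointerException
            n++;
        }
        return n;
    }
}
```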
  5. Allow pass-by-reference or multi-value returns
    It really is nasty to write a new class just to return several values from a method. I'd love to have something like C++ references or Python's multi-value return for that.
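This is what that boilerplate looks like today: a whole class (the hypothetical DivMod below) just to hand back two ints:

```java
// A wrapper class whose only job is to carry two return values.
public class DivMod {
    public final int quotient;
    public final int remainder;

    public DivMod(int quotient, int remainder) {
        this.quotient = quotient;
        this.remainder = remainder;
    }

    public static DivMod of(int a, int b) {
        return new DivMod(a / b, a % b);
    }
}
```

In Python this would just be `return a // b, a % b`.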
  6. Replace java.io entirely
    Just admit it: java.io is hopelessly overengineered. The clumsiest and least usable I/O system of all the languages I've ever used. With this number of classes you'd expect to have stuff for all possible cases, but in reality there never is a class that does what you need; you have to combine at least two of them in all cases. And if you need to read a simple text file, out of all that trouble all you get is readLine(): parse it yourself. How to fix it? Just make a clone of C++ iostreams; with a couple of rough edges removed and a couple of extra classes (like for pipes and sockets) it would be a nearly perfect solution.
  7. Package aliases or smallest unique distinction
    One thing I don't like about Java packages is the all-or-nothing approach to their names. In case you have two classes with the same name, you have to use both of them with full package qualification. With the standard reverse-domain naming scheme this means ugly long names. What I'd like is aliases, similar to C++ namespace aliases, which could look like:
    package alias = com.company.product.whatever.pkg;
    With this line, alias.ClassName is identical to com.company.product.whatever.pkg.ClassName.
    Another solution could be to use the unique part of the package name: move backwards from the class name past any dot until you get a unique name. Perhaps whatever.pkg.ClassName is enough to identify the class, and you don't need to type the full ugly package name.