Note: This website is archived. For up-to-date information about D projects and development, please visit

Descent: Action Plan

While finishing the last release of Descent, I realized I had done something wrong, and at the time I couldn't find a solution for it.

How Descent works

Descent performs semantic analysis on modules in order to offer autocompletion, go to definition, etc. How does it do it? When performing semantic analysis of a module, Descent assumes the other modules are already semantically resolved. That is, only active version/debug blocks remain in them, and type resolution has already been done. If this is not the case, semantic analysis is performed on those modules first, and so on. The results of the semantic analysis are stored in memory (in an LRU cache) as a top-level structure of (active) symbols in a module, with resolved type information.
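The cache described above can be sketched as follows. This is a minimal, hypothetical model (Descent's real cache is written in Java and stores much richer structures); the module names and symbol tables are made up for illustration:

```python
from collections import OrderedDict

class ModuleCache:
    """Toy LRU cache mapping module names to their resolved
    top-level structure."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, name):
        if name not in self.entries:
            return None
        self.entries.move_to_end(name)  # mark as most recently used
        return self.entries[name]

    def put(self, name, structure):
        if name in self.entries:
            self.entries.move_to_end(name)
        self.entries[name] = structure
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = ModuleCache(2)
cache.put("std.stdio", {"writeln": "function"})
cache.put("std.string", {"format": "function"})
cache.get("std.stdio")            # touch std.stdio
cache.put("std.conv", {"to": "template"})
print(cache.get("std.string"))    # least recently used, evicted -> None
```

The point of the LRU policy is simply to bound memory: when many modules have been resolved, the ones you haven't touched for the longest are re-resolved on demand rather than kept forever.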

Once semantic analysis starts on a module, its structure is considered already known.

This presents some problems:

  • Every project must use the same version/debug identifiers, because the resolved information in memory (where inactive symbols were discarded) is for those identifiers. If the user wishes to use a different configuration for each project, she can't. And if the user changes the configuration, the semantic resolution in memory must be dropped and rebuilt.
  • If a module A depends on B, which in turn depends on A, semantic analysis is not done correctly: while resolving B, A's structure is considered already known, so A is reported as an empty module (its structure is still being built, and this avoids infinite recursion). This, along with other unrelated problems (dirty code), is what causes Descent to malfunction in some cases.
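The second problem can be sketched with a toy resolver. This is a hypothetical model, not Descent's code: it marks modules as "in progress" and reports an empty module when re-entered, so the structure cached for B ends up missing the symbols it should have seen in A:

```python
# a imports b, and b imports a (a circular dependency)
IMPORTS = {"a": ["b"], "b": ["a"]}
SYMBOLS = {"a": {"Foo"}, "b": {"Bar"}}
resolved = {}

def resolve(module, in_progress=None):
    in_progress = in_progress or set()
    if module in resolved:
        return resolved[module]
    if module in in_progress:
        return set()  # cycle guard: pretend the module is empty
    in_progress.add(module)
    visible = set(SYMBOLS[module])
    for imp in IMPORTS[module]:
        visible |= resolve(imp, in_progress)
    resolved[module] = visible
    return visible

resolve("a")
print(resolved["a"])  # {'Foo', 'Bar'} -- a resolved fine
print(resolved["b"])  # {'Bar'} -- b never saw a's Foo, as described above
```

The cycle guard terminates the recursion, but at the cost of caching a wrong (incomplete) structure for B, which is exactly the malfunction described in the bullet above.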

A solution

To explain what a solution looks like, let's take a look at JDT. JDT also stores the top-level structure (types, fields, methods) of a compilation unit in memory, but without performing semantic analysis. Each time you modify a compilation unit, semantic analysis is performed for it, and types are looked up in this in-memory structure. If further information is needed, such as the members of a type returned by a method, semantic analysis is run for those too, but only for the needed bits. This could perform badly, but because of Java's lookup rules, and because JDT practically writes qualified type imports for you, performance is fine.
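JDT's on-demand scheme can be sketched like this. Everything here is a made-up model, not JDT's actual API: unresolved structures are just name tables, and "semantic analysis" is only run for the one type a lookup needs:

```python
# Unresolved top-level structures: names only, no semantic analysis yet.
# Types and members are hypothetical.
STRUCTURES = {
    "List": {"methods": {"get": "Item"}},
    "Item": {"methods": {"name": "String"}},
}

def members_of(type_name):
    """Resolve only the one type we need, on demand."""
    return STRUCTURES[type_name]["methods"]

def return_type(type_name, method):
    return members_of(type_name)[method]

# Completing `list.get(0).` resolves only List, then Item -- nothing else.
t = return_type("List", "get")
print(t)                      # Item
print(sorted(members_of(t)))  # ['name']
```

The key property is that resolution work is proportional to the lookup being performed, not to the size of the project, which is why this scales for Java.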

This is also the solution for Descent. Why didn't I do it this way in the first place? Because I didn't know much about how JDT worked; I learned it little by little while writing Descent, and I discovered this unexpected behaviour only after releasing 0.5.

In this way, the first problem is solved because the whole unresolved structure of a module is in memory: no version/debug blocks are discarded. The second problem is also solved because structure building never leads to recursion, since only unresolved information is stored. When semantic analysis is performed (when you open a file for editing) there may still be recursion, but DMD's semantic analysis takes care of it (structure building and in-memory storage are specific to Descent... while the recursion could be avoided, the classes ported from JDT would have to be heavily changed, and we don't want that, so that we can keep receiving upgrades from JDT).

There's a problem, though. D's compile-time capabilities far surpass Java's (which are, essentially, none). Types can be created at compile time, as can methods. Templates are evaluated at compile time, so their bodies are sometimes needed. Functions can be evaluated at compile time to return strings that are used in mixins, which in turn define new types and functions, so function bodies may also be needed. So if Descent wants to follow JDT's approach, it can't do semantic analysis only on the needed bits: it must do it fully for the involved modules, and only then begin symbol lookup, type resolution, etc. Even if selective imports are used, the selected symbols may be generated at compile time. So performance, in comparison with JDT, will be relatively bad.
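The difficulty can be mimicked in Python, which (loosely like D's string mixins) can generate code from strings at runtime. This is an analogy only, not D code: the point is that a tool cannot list the generated symbols by parsing the source; it must actually evaluate the generating function, which is why "only the needed bits" is not enough:

```python
# Loose analogy of a D string mixin: make_getters builds source code as a
# string and exec-utes it, so getX/getY exist only after evaluation.
# A tool that merely parses this file cannot know they exist.

def make_getters(fields):
    src = "\n".join(
        f"def get{f.capitalize()}(obj): return obj['{f}']" for f in fields
    )
    namespace = {}
    exec(src, namespace)  # the "compile-time evaluation" in this analogy
    return namespace

symbols = make_getters(["x", "y"])
print(sorted(n for n in symbols if n.startswith("get")))  # ['getX', 'getY']
```

In D the equivalent evaluation happens at compile time (CTFE feeding a mixin), so an IDE that wants to show the resulting symbols has no choice but to run full semantic analysis of the involved modules.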

Another problem is the following. JDT suggests types to use (adding imports for them automatically) by means of a search index. This search index stores unresolved information. If the same is done in Descent, every symbol will be indexed, regardless of whether it is active or inactive under the current settings of a project. So Descent could potentially suggest a type or method for autocompletion which is then not found during semantic analysis, thus reporting an error. But this is a minor problem.
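The mismatch can be sketched as follows. The symbol names and version identifiers are invented for illustration: the index sees every symbol in the source text, while semantic analysis only keeps those active under the project's version identifiers:

```python
# Everything that appears textually in the source, with the version
# identifier guarding each symbol (hypothetical names).
SOURCE_SYMBOLS = {
    "winFunc": "Windows",    # inside version(Windows)
    "posixFunc": "Posix",    # inside version(Posix)
}
ACTIVE_VERSIONS = {"Posix"}  # this project's configuration

def index_suggestions(prefix):
    # the search index ignores version blocks entirely
    return sorted(s for s in SOURCE_SYMBOLS if s.startswith(prefix))

def semantically_visible(symbol):
    return SOURCE_SYMBOLS[symbol] in ACTIVE_VERSIONS

suggested = index_suggestions("win")  # ['winFunc'] is offered anyway...
# ...but semantic analysis then rejects it, producing the spurious error:
print([s for s in suggested if not semantically_visible(s)])  # ['winFunc']
```

As the text says, this is only a cosmetic annoyance: the worst case is a completion that immediately gets flagged as an error.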


All this explanation, especially the second-to-last paragraph, leads to this conclusion: making a full-blown, JDT-smart IDE for D is hard, especially because symbols can be generated at compile time. Well, unless compile-time generated information is ignored by the IDE; but when an IDE doesn't show you exactly what you can do, you start to doubt whether you or the IDE made a mistake, and then you go to the source code to see if that symbol was really there... which is exactly what an IDE tries to help you avoid.

So compile-time manipulation is bad for two reasons: it's hard (if at all possible) to debug, and it makes writing an IDE harder. But nobody can doubt its power (in fact, I'm currently facing an implementation problem in Java where I wish I could use template mixins).

Note that the languages that have full-blown IDE support (mainly C++, Java, .Net) don't generate symbols at compile time. Well, those languages are also backed by huge enterprises, but that's another issue.

If anyone comes up with another solution to this problem, please tell me. Otherwise, I'll go with this approach and see how performance fares (while also doing some housekeeping). Features will come next, although the builder is a totally unrelated project, so it may get a 0.5.1 + builder release.