trying to write out its macro graph, in case we imported a module that added
another module macro between the most recent local definition and the end of
the module.
llvm-svn: 279024
This differs from the previous version by being more careful about template
instantiation/specialization in order to prevent errors when building with
clang -Werror. Specifically:
* begin is not defined in the template and is instead instantiated when Head
is. I think the warning we get when we don't do that is wrong (PR28815), but
for now at least do it this way to avoid the warning.
* Instead of performing template specializations in LLVM_INSTANTIATE_REGISTRY,
provide a template definition and then do explicit instantiation (see the
sketch below). No compiler I've tried has problems with the other way, but
strictly speaking it's not permitted by the C++ standard, so better safe than
sorry.
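For illustration, here is a minimal sketch of the distinction, using a
hypothetical Collector entry type rather than the actual
LLVM_INSTANTIATE_REGISTRY macro:

  template <typename T> class Registry {
  public:
    struct node { node *Next = nullptr; };
    static node *Head;  // declared in the template...
  };

  // ...defined once as a template definition...
  template <typename T>
  typename Registry<T>::node *Registry<T>::Head = nullptr;

  // ...and then explicitly instantiated for each concrete registry, rather
  // than explicitly specialized (i.e. no "template <>" definitions).
  struct Collector {};
  template Registry<Collector>::node *Registry<Collector>::Head;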
Original commit message:
Currently the Registry class contains the vestiges of a previous attempt to
allow plugins to be used on Windows without using BUILD_SHARED_LIBS, where a
plugin would have its own copy of a registry and export it to be imported by
the tool that's loading the plugin. This only works if the plugin is entirely
self-contained with the only interface between the plugin and tool being the
registry, and in particular this conflicts with how IR pass plugins work.
This patch changes things so that the add_node function of the registry is
instead exported by the tool and then imported by the plugin. This solves the
problem, and also means that instead of every plugin having to export every
registry it uses, LLVM only has to export the add_node functions. This allows
plugins that use a registry to work on Windows if
LLVM_EXPORT_SYMBOLS_FOR_PLUGINS is used.
llvm-svn: 277806
This version has two fixes compared to the original:
* In Registry.h the template static members are instantiated before they are
used, as clang gives an error if you do it the other way around.
* The use of the Registry template in clang-tidy is updated in the same way as
has been done everywhere else.
Original commit message:
Currently the Registry class contains the vestiges of a previous attempt to
allow plugins to be used on Windows without using BUILD_SHARED_LIBS, where a
plugin would have its own copy of a registry and export it to be imported by
the tool that's loading the plugin. This only works if the plugin is entirely
self-contained with the only interface between the plugin and tool being the
registry, and in particular this conflicts with how IR pass plugins work.
This patch changes things so that the add_node function of the registry is
instead exported by the tool and then imported by the plugin. This solves the
problem, and also means that instead of every plugin having to export every
registry it uses, LLVM only has to export the add_node functions. This allows
plugins that use a registry to work on Windows if
LLVM_EXPORT_SYMBOLS_FOR_PLUGINS is used.
llvm-svn: 276973
Currently the Registry class contains the vestiges of a previous attempt to
allow plugins to be used on Windows without using BUILD_SHARED_LIBS, where a
plugin would have its own copy of a registry and export it to be imported by
the tool that's loading the plugin. This only works if the plugin is entirely
self-contained with the only interface between the plugin and tool being the
registry, and in particular this conflicts with how IR pass plugins work.
This patch changes things so that the add_node function of the registry is
instead exported by the tool and then imported by the plugin. This solves the
problem, and also means that instead of every plugin having to export every
registry it uses, LLVM only has to export the add_node functions. This allows
plugins that use a registry to work on Windows if
LLVM_EXPORT_SYMBOLS_FOR_PLUGINS is used.
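As a hedged sketch of what this looks like from the plugin side, using Clang's
FrontendPluginRegistry as the familiar example (the plugin class and option
names below are made up for illustration):

  #include "clang/AST/ASTConsumer.h"
  #include "clang/Frontend/FrontendPluginRegistry.h"
  #include <memory>

  namespace {
  class ExampleAction : public clang::PluginASTAction {
    std::unique_ptr<clang::ASTConsumer>
    CreateASTConsumer(clang::CompilerInstance &, llvm::StringRef) override {
      return std::make_unique<clang::ASTConsumer>();
    }
    bool ParseArgs(const clang::CompilerInstance &,
                   const std::vector<std::string> &) override {
      return true;
    }
  };
  } // namespace

  // Registration boils down to a call to the registry's add_node, which the
  // tool now exports (and the plugin imports) instead of each plugin carrying
  // its own copy of the registry.
  static clang::FrontendPluginRegistry::Add<ExampleAction>
      X("example-action", "hypothetical example plugin");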
Differential Revision: http://reviews.llvm.org/D21385
llvm-svn: 276856
option. Previously these options could both be used to specify that you were
compiling the implementation file of a module, with a different set of minor
bugs in each case.
This change removes -fmodule-implementation-of, and instead tracks a flag to
determine whether we're currently building a module. -fmodule-name now behaves
the same way that -fmodule-implementation-of previously did.
llvm-svn: 261372
Summary: It breaks the build for the ASTMatchers
Subscribers: klimek, cfe-commits
Differential Revision: http://reviews.llvm.org/D13893
llvm-svn: 250827
* adds the -aux-triple option to specify the target triple
* propagates aux target info to AST context and Preprocessor
* pulls in target-specific preprocessor macros
* pulls in target-specific builtins from the aux target
* sets the appropriate host or device attribute on builtins
Differential Revision: http://reviews.llvm.org/D12917
llvm-svn: 248299
So, iterate over the list of macros mentioned in modules, and make sure those
are in the master table.
This isn't particularly efficient, but hopefully it's something that isn't
done too often.
PR23929 and rdar://problem/21480635
llvm-svn: 240571
visibility is enabled) or leave and re-enter it, restore the macro and module
visibility state from the last time we were in that submodule.
This allows mutually-#including header files to stand a chance of being
modularized with local visibility enabled.
llvm-svn: 237871
In preparation for the introduction of more new keywords in the implementation
of the C++ language, this generalizes the support for future keyword compat
diagnostics (e.g., diag::warn_cxx11_keyword) by extending the applicability of
the relevant property in IdentifierTable, with appropriate renaming.
Patch by Hubert Tong!
llvm-svn: 237332
It has no place there; it's not a property of the Module, and it makes
restoring the visibility set when we leave a submodule more difficult.
llvm-svn: 236300
Modules builds fundamentally have a non-linear macro history. In the interest
of better source fidelity, represent the macro definition information
faithfully: we have a linear macro directive history within each module, and at
any point we have a unique "latest" local macro directive and a collection of
visible imported directives. This also removes the attendant complexity of
attempting to create a correct MacroDirective history (which we got wrong
in the general case).
No functionality change intended.
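Roughly, the per-identifier state described above looks like the following
sketch (simplified and paraphrased; the real representation lives in Clang's
Preprocessor and MacroInfo headers):

  #include "llvm/ADT/TinyPtrVector.h"
  namespace clang { class MacroDirective; class ModuleMacro; }

  struct MacroState {
    // Unique "latest" local directive; the local history stays linear.
    clang::MacroDirective *LatestLocalDirective = nullptr;
    // The collection of imported module macros currently visible.
    llvm::TinyPtrVector<clang::ModuleMacro *> VisibleModuleMacros;
  };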
llvm-svn: 236176
the active module macros at the point of definition, rather than reconstructing
it from the macro history. No functionality change intended.
llvm-svn: 235941
Previously we'd defer this determination until writing the AST, which didn't
allow us to use this information when building other submodules of the same
module. This change also allows us to use a uniform mechanism for writing
module macro records, independent of whether they are local or imported.
llvm-svn: 235614
in debugger mode) to accept @import declarations
and pass them to the debugger.
In the preprocessor, accept import declarations
if the debugger is enabled, but don't actually
load the module; just pass the import path on to
the preprocessor callbacks.
In the Objective-C parser, if it sees an import
declaration in statement context (as is usual for
LLDB), ignore it and return a NullStmt.
llvm-svn: 223855
rather than trying to extract this information from the FileEntry after the
fact.
This has a number of beneficial effects. For instance, diagnostic messages for
failed module builds give a path relative to the "module root" rather than an
absolute file path, and the contents of the module includes file are no longer
dependent on what files the including TU happened to inspect prior to
triggering the module build.
llvm-svn: 223095
Only those callers who are dynamically passing ownership should need the
3-argument form. Those accepting the default ("do pass ownership")
should do so explicitly with a unique_ptr now.
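Purely as an illustration of the idiom (the function name and parameters here
are hypothetical, not the API this commit actually touched): the common case
takes ownership through a unique_ptr, and only callers deciding ownership at
runtime use the longer raw-pointer form.

  #include <memory>
  class Consumer;

  // Default: the caller explicitly hands over ownership.
  void addConsumer(std::unique_ptr<Consumer> C);
  // 3-argument form, only for callers that decide ownership dynamically.
  void addConsumer(Consumer *C, bool AtFront, bool TakeOwnership);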
llvm-svn: 216614
Currently the analyzer lazily models some functions using 'BodyFarm',
which constructs a fake function implementation that the analyzer
can simulate and that approximates the semantics of the function when
it is called. BodyFarm does this by constructing the AST for
such definitions on-the-fly. One strength of BodyFarm
is that all symbols and types referenced by synthesized function
bodies are contextually adapted to the containing translation unit.
The downside is that these ASTs are hardcoded in Clang's own
source code.
A more scalable model is to allow these models to be defined as source
code in separate "model" files and have the analyzer use those
definitions lazily when a function body is needed. Among other things,
it will allow more customization of the analyzer for specific APIs
and platforms.
This patch provides the initial infrastructure for this feature.
It extends BodyFarm to use an abstract API 'CodeInjector' that can be
used to synthesize function bodies. That 'CodeInjector' is
implemented using a new 'ModelInjector' in libFrontend, which lazily
parses a model file and injects the ASTs into the current translation
unit.
Models are currently found by specifying a 'model-path' as an
analyzer option; if no path is specified the CodeInjector is not
used, thus defaulting to the current behavior in the analyzer.
Models currently contain a single function definition, and are
looked up by file name: <function name>.model. This is an
initial starting point for something richer, but it bootstraps
this feature for future evolution.
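To make that concrete, a hypothetical model for a function 'drain' would live
in a file named drain.model somewhere on the model-path and contain just the
definition the analyzer should simulate (contents invented for illustration):

  // drain.model: parsed lazily and injected when the analyzer needs a body
  // for 'drain'.
  void drain(int *buf, int n) {
    for (int i = 0; i < n; ++i)
      buf[i] = 0;
  }

It would then be picked up by pointing the analyzer at that directory via the
'model-path' analyzer option mentioned above (treat the exact invocation
spelling as illustrative rather than authoritative).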
This patch was contributed by Gábor Horváth as part of his
Google Summer of Code project.
Some notes:
- This introduces the notion of a "model file" into
FrontendAction and the Preprocessor. This nomenclature
is specific to the static analyzer, but possibly could be
generalized. Essentially these are sources pulled in from outside
the principal translation unit.
Preprocessor gets an 'InitializeForModelFile' and a
'FinalizeForModelFile', which could possibly be hoisted out
of Preprocessor if Preprocessor exposed a new API to
change the PragmaHandlers and some other internal pieces. This
can be revisited.
FrontendAction gets an 'isModelParsingAction()' predicate function
used to allow a new FrontendAction to recycle the Preprocessor
and ASTContext. This name could probably be made something
more general (i.e., not tied to 'model files') at the expense
of losing the intent of why it exists. This can be revisited.
- This is a moderate-sized patch; it has gone through some amount of
offline code review. Most of the changes to the non-analyzer
parts are fairly small, and would make little sense without
the analyzer changes.
- Most of the analyzer changes are plumbing, with the interesting
behavior being introduced by ModelInjector.cpp and
ModelConsumer.cpp.
- The new functionality introduced by this change is off by default.
It requires an analyzer config option to enable.
llvm-svn: 216550
And in the process, discover that FileManager::removeStatCache had a
double-delete when removing an element from the middle of the list (there was
no problem at the beginning or the end of the list), and add a unit test to
exercise the code path (which, when run with modifications to match the old
API but without this patch applied, successfully crashed).
llvm-svn: 215388
Remove pointless MICache: it only ever contained up to 1 object, and was only
non-empty when recovering from an error. There's no performance or memory win
from maintaining this cache.
llvm-svn: 213825
MacroArgs are owned by TokenLexer, and when a TokenLexer is destroyed, it'll
call its MacroArgs's destroy() method. destroy() only appends the MacroArg to
Preprocessor's MacroArgCache list, and Preprocessor's destructor then calls
deallocate() on all MacroArgs in that list. This method then ends up freeing
the MacroArgs's memory.
In a code completion context, Parser::cutOffParsing() gets called when a code
completion token is hit, which changes the type of the current token to
tok::eof. eof tokens aren't always ConsumeToken()ed, so
Preprocessor::HandleEndOfFile() isn't always called, and that function is
responsible for popping the macro stack.
Due to this, Preprocessor::CurTokenLexer can be non-NULL when
~Preprocessor runs. It's a unique_ptr, so it ended up being destructed after
the body of ~Preprocessor completed, and its MacroArgs thus got added to the
freelist after the code freeing things on the freelist had already completed.
The fix is to explicitly call reset() before the freelist processing happens.
(See the bug for more notes.)
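The shape of the fix, as a sketch (member and method details paraphrased, not
copied from Preprocessor.cpp):

  Preprocessor::~Preprocessor() {
    // Destroy any live TokenLexer *before* draining the freelist, so that its
    // MacroArgs are added to MacroArgCache in time to be freed below.
    CurTokenLexer.reset();

    // Now it is safe to release everything on the MacroArgs freelist.
    for (MacroArgs *ArgList = MacroArgCache; ArgList;)
      ArgList = ArgList->deallocate();
  }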
llvm-svn: 208438
The Preprocessor::Initialize() function already offers a clear interface to
achieve this, further reducing the confusing number of states a newly
constructed preprocessor can have.
llvm-svn: 207825
These features are new in VS 2013 and are necessary in order to lay out
std::ostream correctly. Currently we have an ABI incompatibility when
self-hosting with the 2013 stdlib in our convertible_fwd_ostream wrapper
in gtest.
This change adds another implicit attribute, MSVtorDispAttr, because
implicit attributes are currently the best way to make sure the
information stays on class templates through instantiation.
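A hedged usage sketch of the vtordisp controls this implements (the exact set
of accepted forms follows MSVC's documentation):

  #pragma vtordisp(push, 2)   // force vtordisp fields for the classes below
  struct VBase { virtual ~VBase() {} };
  struct Derived : virtual VBase {};
  #pragma vtordisp(pop)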
Reviewers: majnemer
Differential Revision: http://llvm-reviews.chandlerc.com/D2746
llvm-svn: 201274
encodes the canonical rules for LLVM's style. I noticed this had drifted
quite a bit when cleaning up LLVM, so I wanted to clean up Clang as well.
llvm-svn: 198686
module. Use the marker to diagnose cases where we try to transition between
submodules when not at the top level (most likely because a closing brace was
missing at the end of a header file, but it is also possible if submodule headers
attempt to do something fundamentally non-modular, like our .def files).
llvm-svn: 195543
The preprocessor currently recognizes module import declarations, and loads
the named module, based on seeing the 'import' keyword followed by an
identifier. This sequence is fairly unlikely in C (one would need a
type named 'import'), but is more common in Objective-C (where a
variable named 'import' can cause problems). Since import declarations
currently require a leading '@', recognize that in the preprocessor as
well. Fixes <rdar://problem/15084587>.
llvm-svn: 194225
Before this patch, Lex() would recurse whenever the current lexer changed (e.g.
upon entry into a macro). This patch turns the recursion into a loop: the
various lex routines now don't return a token when the current lexer changes,
and at the top level Preprocessor::Lex() now loops until it finds a token.
Normally, the recursion wouldn't end up being very deep, but the recursion depth
can explode in edge cases like a bunch of consecutive macros which expand to
nothing (like in the testcase test/Preprocessor/macro_expand_empty.c in this
patch).
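A minimal sketch of the new control flow (simplified from the real
Preprocessor::Lex, which switches over several lexer kinds):

  void Preprocessor::Lex(Token &Result) {
    bool ReturnedToken;
    do {
      switch (CurLexerKind) {
      case CLK_Lexer:
        ReturnedToken = CurLexer->Lex(Result);
        break;
      case CLK_TokenLexer:
        ReturnedToken = CurTokenLexer->Lex(Result);
        break;
      default: // caching lexer, post-module-import lexing, etc.
        ReturnedToken = true;
        break;
      }
      // A false return means the current lexer changed (e.g. we entered a
      // macro expansion); loop here instead of recursing into Lex().
    } while (!ReturnedToken);
  }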
<rdar://problem/14569770>
llvm-svn: 190980
For each macro directive (define, undefine, visibility) have a separate object that gets chained
to the macro directive history. This has several benefits:
- No need to mutate a MacroDirective when there is an undefine/visibility directive. Stuff like
PPMutationListener becomes unnecessary.
- No need to keep extra source locations for the undef/visibility locations on the define
directive object (which is the majority of the directives).
- Much easier to hide/unhide a section in the macro directive history.
- Easier to track the effects of the directives across different submodules.
llvm-svn: 178037
for the data specific to a macro definition (e.g. what the tokens are), and
MacroDirective class which encapsulates the changes to the "macro namespace"
(e.g. the location where the macro name became active, the location where it was undefined, etc.)
(A MacroDirective always points to a MacroInfo object.)
Usually a macro definition (MacroInfo) is where a macro name becomes active (MacroDirective), but
splitting the concepts allows us to better model the effect of modules on the macro namespace
(as a bonus, it also allows better modeling of push_macro/pop_macro #pragmas).
Modules can have their own macro history, separate from the local (current translation unit)
macro history; MacroDirectives will be used to model the macro history (changes to macro namespace).
For example, if "@import A;" imports macro FOO, there will be a new local MacroDirective created
to indicate that "FOO" became active at the import location. Module "A" itself will contain another
MacroDirective in its macro history (at the point of the definition of FOO) and both MacroDirectives
will point to the same MacroInfo object.
Introducing the separation of macro concepts is the first part towards better modeling of module macros.
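A simplified sketch of the separation (not the actual declarations in
MacroInfo.h):

  class MacroInfo {
    SourceLocation DefinitionLoc; // where the replacement tokens were written
    // ... parameter list, replacement tokens, flags, etc.
  };

  class MacroDirective {
    SourceLocation Loc;           // where the name became active / was undefined
    MacroDirective *Previous;     // per-name history chain
    MacroInfo *Info;              // the definition this directive refers to
  };

  // For "@import A;" importing FOO: module A's history holds a directive at
  // FOO's definition site, the importing TU gets a new local directive at the
  // import location, and both point at the same MacroInfo.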
llvm-svn: 175585
Compilation always sets this explicitly, but creating a preprocessor
manually should still put the 'IsPreprocessedOutput' flag in a valid state.
llvm-svn: 174077
This is a missing piece for C99 conformance.
This patch handles UCNs by adding a '\' case to LexTokenInternal and
LexIdentifier -- if we see a backslash, we tentatively try to read in a UCN.
If the UCN is not syntactically well-formed, we fall back to the old
treatment: a backslash followed by an identifier beginning with 'u' (or 'U').
Because the spelling of an identifier with UCNs still has the UCN in it, we
need to convert that to UTF-8 in Preprocessor::LookUpIdentifierInfo.
Of course, valid code that does *not* use UCNs will see only a very minimal
performance hit (checks after each identifier for non-ASCII characters,
checks when converting raw_identifiers to identifiers that they do not
contain UCNs, and checks when getting the spelling of an identifier that it
does not contain a UCN).
This patch also adds basic support for actual UTF-8 in the source. This is
treated almost exactly the same as UCNs except that we consider stray
Unicode characters to be mistakes and offer a fixit to remove them.
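For example, both of the following now lex as the same identifier (the first
spelled with a UCN, the second written directly in UTF-8):

  extern int caf\u00e9;   // UCN spelling of 'café'
  int *p = &café;         // same identifier, spelled in UTF-8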
llvm-svn: 173369
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module headers.
llvm-svn: 169237
PreprocessingRecord and into its own class, PPConditionalDirectiveRecord.
Decoupling allows a client to use the functionality of PPConditionalDirectiveRecord
without needing a PreprocessingRecord.
llvm-svn: 169229
common LexStringLiteral function. In doing so, some consistency problems have
been ironed out (e.g. where the first token in the string literal was lexed
with macro expansion, but subsequent ones were not) and also an erroneous
diagnostic has been corrected.
LexStringLiteral is complemented by a FinishLexStringLiteral function which
can be used in the situation where the first token of the string literal has
already been lexed.
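A hedged usage sketch (signature paraphrased from the Preprocessor interface
of that time; treat the details as illustrative): lexing a whole string
literal argument in one call, e.g. from a pragma handler.

  #include "clang/Lex/Preprocessor.h"

  static bool lexPragmaString(clang::Preprocessor &PP, std::string &Str) {
    clang::Token Tok;
    if (!PP.LexStringLiteral(Tok, Str, "pragma example",
                             /*AllowMacroExpansion=*/true))
      return false;  // a diagnostic has already been emitted
    return true;     // Str holds the concatenated, unquoted value
  }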
llvm-svn: 168266
MacroInfo*. Instead of simply dumping an offset into the current file,
give each macro definition a proper ID with all of the standard
modules-remapping facilities. Additionally, when a macro is modified
in a subsequent AST file (e.g., #undef'ing a macro loaded from another
module or from a precompiled header), provide a macro update record
rather than rewriting the entire macro definition. This gives us
greater consistency with the way we handle declarations, and ties
together macro definitions much more cleanly.
Note that we're still not actually deserializing macro history (we
never were), but it's far easier to do properly now.
llvm-svn: 165560
* Retain comments in the AST
* Serialize/deserialize comments
* Find comments attached to a certain Decl
* Expose raw comment text and SourceRange via libclang
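A hedged sketch of the libclang side (function names as I recall them; verify
against Index.h before relying on them), printing the raw comment attached to
each cursor in a translation unit:

  #include <clang-c/Index.h>
  #include <cstdio>

  int main(int argc, char **argv) {
    CXIndex Idx = clang_createIndex(0, 0);
    CXTranslationUnit TU = clang_parseTranslationUnit(
        Idx, argv[1], argv + 2, argc - 2, nullptr, 0, CXTranslationUnit_None);
    if (!TU)
      return 1;
    clang_visitChildren(
        clang_getTranslationUnitCursor(TU),
        [](CXCursor C, CXCursor, CXClientData) {
          CXString Comment = clang_Cursor_getRawCommentText(C);
          if (const char *Text = clang_getCString(Comment))
            std::printf("%s\n", Text);
          clang_disposeString(Comment);
          return CXChildVisit_Recurse;
        },
        nullptr);
    clang_disposeTranslationUnit(TU);
    clang_disposeIndex(Idx);
    return 0;
  }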
llvm-svn: 158771
r158085 added some logic to track predefined declarations. The main reason we
had predefined declarations in the input was that the __builtin_va_list
declarations were injected into the preprocessor input. As of r158592 we
explicitly build the __builtin_va_list declarations. Therefore the predefined
decl tracking is no longer needed.
llvm-svn: 158732
The preprocessor's handling of diagnostic push/pops is stateful, so
encountering pragmas during a re-parse causes problems. HTMLRewrite
already filters out normal # directives including #pragma, so it's
clear it's not expected to be interpreting pragmas in this mode.
This fix adds a flag to Preprocessor to explicitly disable pragmas.
The "right" fix might be to separate pragma lexing from pragma
parsing so that we can throw away pragmas like we do preprocessor
directives, but right now it's important to get the fix in.
Note that this has nothing to do with the "hack" of re-using the
input preprocessor in HTMLRewrite. Even if we someday copy the
preprocessor instead of re-using it, the copy would (and should) include
the diagnostic level tables and have the same problems.
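A hedged sketch of how a re-lexing client can use the new flag (accessor names
as I recall them from Preprocessor.h; verify before relying on them):

  #include "clang/Lex/Preprocessor.h"

  static void relexForMarkup(clang::Preprocessor &PP) {
    bool Saved = PP.getPragmasEnabled();
    PP.setPragmasEnabled(false);   // don't re-interpret #pragma state
    // ... re-lex the buffer purely for syntax highlighting ...
    PP.setPragmasEnabled(Saved);
  }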
llvm-svn: 158214