ClamAV can detect HTML links that look suspicious when the display text is a URL pointing to a different domain than the one in the actual URL. Unfortunately, it is fairly common for a company to contract out web services and to use HTML link display text to make it look like a link to the company website. Because this practice is commonplace, ClamAV only performs phishing checks for specific websites that are popularly targeted by phishing campaigns.

Signatures to identify domains that should be monitored for phishing attempts are listed in ClamAV PDB database files, such as `daily.pdb`, a file found in the `daily.cvd` archive.
Unfortunately, many websites listed in the PDB phishing database also send emails with links that display a different domain than is in the actual link. To mitigate false positive detections in non-malicious links, ClamAV has allow list signatures in ClamAV WDB database files, such as `daily.wdb`, another file found in the `daily.cvd` archive.
To help you identify what triggered a heuristic phishing alert, `clamd` will print a message indicating the "Display URL" and "Real URL" involved in the alert.

For example, suppose that `amazon.com` were listed in ClamAV's loaded PDB database. When scanning an email with a link whose display text claims to be `https://www.amazon.com/` but which in fact points somewhere else, you might observe this message before the alert:

```
LibClamAV info: Suspicious link found!
LibClamAV info: Real URL: https://someshadywebsite.example.com
LibClamAV info: Display URL: https://www.amazon.com

/path/to/suspicious/email.eml: Heuristics.Phishing.Email.SpoofedDomain FOUND
```
Table of Contents
- Phishing Signatures
- Database file format
- PDB format
- GDB format
- WDB format
- Examples of PDB signatures
- Examples of WDB signatures
- Example for how the URL extractor works
- How matching works
- Simple patterns
- Regular expressions
- Introduction to regular expressions
- How to create database files
## Database file format

### PDB format
This file contains URLs/hosts that should be monitored for phishing attempts. It contains lines in the following format:

- `R`: a regular expression, matched against the concatenated URL. The last 3 characters of the regular expression cannot be regex special characters and must be an exact match.
- `H`: `DisplayedHostname` as a simple pattern (matched literally, not as a regular expression). The pattern can match either the full hostname or a subdomain of the specified hostname. To avoid false matches in the case of subdomain matches, the engine checks that there is a dot (`.`) or a space before the matched portion.
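The subdomain boundary check can be illustrated with a short sketch (illustrative Python only, not ClamAV's actual implementation; `hostname_matches` is a made-up helper):

```python
def hostname_matches(displayed_host: str, pattern_host: str) -> bool:
    """Match a simple H-style pattern: the full hostname, or a subdomain of it.

    Requiring a "." boundary prevents "evilamazon.com" from matching "amazon.com".
    """
    if displayed_host == pattern_host:
        return True
    # Subdomain match: require a dot immediately before the matched portion
    return displayed_host.endswith("." + pattern_host)

print(hostname_matches("amazon.com", "amazon.com"))      # True
print(hostname_matches("www.amazon.com", "amazon.com"))  # True
print(hostname_matches("evilamazon.com", "amazon.com"))  # False
```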
The fields are:

- `RealURL`: the URL the user is actually sent to; for example, the `href` attribute of an HTML anchor (`<a>` tag).
- `DisplayedURL`: the URL description displayed to the user, where it is claimed they will be sent; for example, the contents of an HTML anchor (`<a>` tag).
- `DisplayedHostname`: the hostname portion of the `DisplayedURL`.
- `FuncLevelSpec`: an (optional) functionality level; 2 formats are possible:
  - `minlevel`: all engines having functionality level >= `minlevel` will load this line.
  - `minlevel-maxlevel`: engines with functionality level >= `minlevel` and < `maxlevel` will load this line.
### GDB format

This file contains URL hashes in the following format:

```
S:P:HostPrefix[:FuncLevelSpec]
S:F:Sha256hash[:FuncLevelSpec]
S1:P:HostPrefix[:FuncLevelSpec]
S1:F:Sha256hash[:FuncLevelSpec]
S2:P:HostPrefix[:FuncLevelSpec]
S2:F:Sha256hash[:FuncLevelSpec]
S:W:Sha256hash[:FuncLevelSpec]
```
- `S1`: these are hashes for Google Safe Browsing malware sites, and should not be used for other purposes.
- `S2`: these are hashes for Google Safe Browsing phishing sites, and should not be used for other purposes.
- `S`: hashes for blocking phishing sites. Virus name: `Phishing.URL.Blocked`.
- `S:W`: locally allowed hashes.
- `HostPrefix`: 4-byte prefix of the sha256 hash of the last 2 or 3 components of the hostname. If the prefix doesn't match, no further lookups are performed.
- `Sha256hash`: sha256 hash of the canonicalized URL, or a sha256 hash of its prefix/suffix according to the Google Safe Browsing "Performing Lookups" rules. There should be a corresponding `:P:HostkeyPrefix` entry for the hash to be taken into consideration.
To see which hash/URL matched, look at the `clamscan --debug` output for the following strings: `Looking up hash`, `prefix matched`, and `Hash matched`. To ignore .gdb entries, create a `local.gdb` file and add an `S:W` line with the sha256 hash you wish to allow.
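As a rough sketch of the `HostPrefix` idea (illustrative only: real GDB entries follow the full Google Safe Browsing canonicalization rules, which this skips):

```python
import hashlib

def host_prefix(hostname: str, components: int = 2) -> str:
    """Return the 4-byte (8 hex digit) prefix of the sha256 hash of the
    last `components` components of a hostname."""
    key = ".".join(hostname.split(".")[-components:])
    return hashlib.sha256(key.encode("ascii")).hexdigest()[:8]

# The prefix depends only on the trailing hostname components,
# so all subdomains of example.com share one prefix.
print(host_prefix("www.example.com"))
print(host_prefix("mail.example.com"))
```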
### WDB format

This file contains URL pairs for links that may look suspicious but are safe and should be allowed. It contains lines in the following format:

- `X`: a regular expression, matched against the entire URL, not just the hostname.
  - The regular expression is by default anchored to start-of-line and end-of-line, as if you had wrapped it in `^` and `$`.
  - A trailing `/` is automatically added both to the regex and to the input string to avoid false matches.
  - The regular expression matches the concatenation of the `RealURL`, a colon (`:`), and the `DisplayedURL` as a single string. It doesn't separately match `RealURL` and `DisplayedURL`.
  - The last 3 characters of the regular expression cannot be regex special characters and must be an exact match.
- `M`: matches a hostname, or a subdomain of it; see the notes for `H` above.
Additional notes:

- Empty lines are ignored.
- The colons are mandatory.
- Don't leave extra spaces at the end of a line!
- If any of the lines don't conform to this format, ClamAV will abort with a Malformed Database Error.

See the section on how the URL extractor works for more details on `RealURL` and `DisplayedURL`.
### Examples of PDB signatures

To check for phishing mails that target `amazon.com`, or subdomains of `amazon.com`, add an `H` signature for that hostname. To do the same for another domain, add a second `H` line. Alternatively, you could use a regex (`R`) PDB signature to check both domains at once.
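A sketch of what such entries could look like (hypothetical signatures: the second domain `amazon.co.uk`, the colon separators, and the exact regex are illustrative assumptions to be checked against the format description above):

```
H:amazon.com
H:amazon.co.uk
R:.+:.+\.amazon\.(com|co\.uk)([/?].*)?
```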
You can limit the signatures to certain engine versions using the `FuncLevelSpec` field. For example:

- Restrict so that engine versions 20 through 30 can load it, but not 31+.
- Restrict so that engine versions >= 20 can load it.
- Restrict so that engine versions <= 20 can load it.

In a real situation, you'd probably use the second form: for example, if you are using a feature of the signatures not available in earlier versions, or if earlier versions have bugs with your signature. Neither is the case here; the above examples are for illustrative purposes only.
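Using the `minlevel` / `minlevel-maxlevel` syntax from the `FuncLevelSpec` description (with `maxlevel` exclusive), and a hypothetical `H:amazon.com` signature as the base, the three restrictions could be sketched as:

```
H:amazon.com:20-31
H:amazon.com:20-
H:amazon.com:0-21
```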
### Examples of WDB signatures

To allow Amazon's country-specific domains as well as `amazon.com`, mixing domain names in the `RealURL` and the `DisplayedURL`, you can use a single allow-list regex signature.
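Based on the explanation below (the listed country domains, the recommended `([/?].*)?` URL endings, and the `:17-` functionality level), the signature would look roughly like this sketch:

```
X:.+\.amazon\.(at|ca|co\.uk|co\.jp|de|fr)([/?].*)?:.+\.amazon\.com([/?].*)?:17-
```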
Explanation of this signature:

- `X`: this is a regular expression signature.
- `:17-`: load the signature only for engines with functionality level >= 17.

The regular expression itself is what remains with the `:17-` stripped and a trailing `/` automatically appended. Note that it is a single regular expression, and not 2 regular expressions split at the `:`:

- `.+\.`: any subdomain of...
- `amazon`: ...the domain we are allowing (`amazon`).
- `(at|ca|co\.uk|co\.jp|de|fr)`: the country-domains: at, ca, co.uk, co.jp, de, fr.
- `([/?].*)?`: the recommended way to end the real-URL part; this protects against embedded URLs (where the allowed hostname appears only as a prefix of a longer, malicious hostname).
- `:`: the `RealURL` and `DisplayedURL` are concatenated via a `:`, so match a literal `:` here.
- `.+\.amazon\.com`: any subdomain of `amazon.com`.
- `([/?].*)?`: the recommended way to end the displayed-URL part, to protect against embedded URLs.
- `/`: automatically added to further protect against embedded URLs.
When you add an entry, make sure you check that both domains are owned by the same entity. This signature allows links claiming to point to `amazon.com` (the `DisplayedURL`), when in fact they really go to a country-specific domain of Amazon (the `RealURL`).
### Example for how the URL extractor works

Consider the following HTML file:

```html
<html>
<a href="http://1.realurl.example.com/"> 1.displayedurl.example.com </a>
<a href="http://2.realurl.example.com"> 2 d<b>i<p>splayedurl.e</b>xa<i>mple.com </a>
<a href="http://3.realurl.example.com"> 3.nested.example.com
<a href="http://4.realurl.example.com"> 4.displayedurl.example.com </a>
</a>
<form action="http://5.realurl.example.com">
sometext
<img src="http://5.displayedurl.example.com/img0.gif"/>
<a href="http://5.form.nested.displayedurl.example.com"> 5.form.nested.link-displayedurl.example.com </a>
</form>
<a href="http://6.realurl.example.com"> 6.displ <img src="6.displayedurl.example.com/img1.gif"/> ayedurl.example.com </a>
<a href="http://7.realurl.example.com"> <iframe src="http://7.displayedurl.example.com"> </a>
```
The phishing engine extracts the following `RealURL`/`DisplayedURL` pairs from it:

```
http://1.realurl.example.com/                  1.displayedurl.example.com
http://2.realurl.example.com                   2displayedurl.example.com
http://3.realurl.example.com                   3.nested.example.com
http://4.realurl.example.com                   4.displayedurl.example.com
http://5.realurl.example.com                   http://5.displayedurl.example.com/img0.gif
http://5.realurl.example.com                   http://5.form.nested.displayedurl.example.com
http://5.form.nested.displayedurl.example.com  5.form.nested.link-displayedurl.example.com
http://6.realurl.example.com                   6.displayedurl.example.com
http://6.realurl.example.com                   6.displayedurl.example.com/img1.gif
```
## How matching works

The phishing detection module processes pairs of RealURL/DisplayedURL. Matching against `daily.wdb` is done as follows: the RealURL is concatenated with a `:` and with the DisplayedURL, and that string is matched against the lines in `daily.wdb`.

So if you have a line in `daily.wdb` that pairs `www.google.ro` as the RealURL with `www.google.com` as the DisplayedURL, then `<a href='http://www.google.ro'>www.google.com</a>` will be allowed, but `<a href='http://images.google.com'>www.google.com</a>` will not.
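Such an allow-list entry could be sketched with the `M` hostname form described in the WDB format section (a hypothetical entry, not an official signature):

```
M:www.google.ro:www.google.com
```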
In the case of the allow list, a match means that the RealURL/DisplayedURL combination is considered clean, and no further checks are performed on it.
In the case of the domain list, a match means that the RealURL/DisplayedURL is going to be checked for phishing attempts.
Furthermore, you can restrict which checks are to be performed by specifying the 3-digit hex number described in the Flags section.
The HTML parser extracts pairs of `RealURL`/`DisplayedURL` based on the following rules. After URLs have been extracted, they are normalized and cut after the hostname.

- `<a>`: the `href` attribute is the `RealURL`; the `DisplayedURL` is the tag-stripped contents of the `<a>` tag, so for example `<b>` tags are stripped (but not their contents). An `<a>` tag within an `<a>` tag (besides being invalid HTML) is treated as the start of a new pair.
- `<form>`: the `action` attribute is the `RealURL`, and a nested `<a>` tag is the `DisplayedURL`.
- `<img>`: if nested within an `<a>` tag, the `RealURL` is the `href` of the `<a>` tag, and the `src` attribute is the `DisplayedURL`; if nested within a `<form>` tag, then the `action` attribute of the `<form>` tag is the `RealURL`.
- `<iframe>`: if nested within an `<a>` tag, its `src` attribute is the `DisplayedURL` and the `href` of its parent `<a>` tag is the `RealURL`; if nested within a `<form>` tag, then the `action` attribute of the `<form>` tag is the `RealURL`.
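As a toy illustration of just the first rule (the `<a>` tag), here is a sketch using Python's `html.parser`; this is a simplification for illustration, not ClamAV's parser, and it ignores forms, images, and nesting:

```python
from html.parser import HTMLParser

class AnchorExtractor(HTMLParser):
    """Toy extractor for the <a> rule only: href = RealURL, text = DisplayedURL."""

    def __init__(self):
        super().__init__()
        self.pairs = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._flush()
            self._href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "a":
            self._flush()

    def handle_data(self, data):
        # Collect text even inside nested tags like <b>, mimicking tag-stripping
        if self._href is not None:
            self._text.append(data)

    def _flush(self):
        if self._href is not None:
            self.pairs.append((self._href, "".join(self._text).strip()))
            self._href = None
            self._text = []

p = AnchorExtractor()
p.feed('<a href="http://1.realurl.example.com/">1.displayedurl.example.com</a>')
print(p.pairs)  # [('http://1.realurl.example.com/', '1.displayedurl.example.com')]
```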
Consider this HTML file:

```html
<a href="evilurl">www.paypal.com</a>
<a href="evilurl2" title="www.ebay.com">click here to sign in</a>
<form action="evilurl_form">
Please sign in to <a href="cgi.ebay.com">Ebay</a>using this form
<input type='text' name='username'>Username</input>
....
</form>
<a href="evilurl"><img src="images.paypal.com/secure.jpg"></a>
```
The extracted `RealURL`/`DisplayedURL` pairs include, for example, `evilurl2` paired with the displayed text `click here to sign in` (note that one tag can generate multiple pairs).
### Simple patterns

Simple patterns are matched literally, i.e. if your pattern is `www.google.com`, it is going to match `www.google.com`, and only that. The `.` (dot) character has no special meaning (see the section on regular expressions for how the dot behaves there).
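The difference can be demonstrated with Python's `re` module (used here only to illustrate the dot's behavior; ClamAV uses its own matcher):

```python
import re

# As a simple pattern, "www.google.com" matches only itself.
print("www.google.com" == "www.google.com")   # True
print("wwwXgoogleXcom" == "www.google.com")   # False

# As a regex, each unescaped "." matches any character.
print(bool(re.fullmatch(r"www.google.com", "wwwXgoogleXcom")))    # True
print(bool(re.fullmatch(r"www\.google\.com", "wwwXgoogleXcom")))  # False
```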
### Regular expressions

POSIX regular expressions are supported, and you can consider that internally each expression is wrapped by `^` and `$`. In other words, this means that the regular expression has to match the entire concatenated URL (see the section on how matching works for details on concatenation).
It is recommended that you read the introduction to regular expressions below to learn how to write regular expressions, and then come back and read this for hints.

Be advised that ClamAV contains an internal, very basic regex matcher to reduce the load on the regex matching core. Thus it is recommended that you avoid using regex syntax not supported by it at the very beginning of a regex (at least in the first few characters).
Currently the ClamAV regex matcher supports:

- `\` (escaping special characters)
- `()` (parentheses for grouping, but no group extraction is performed)
- other non-special characters

Thus the following are not supported:

- other "advanced" features not listed in the supported list ;)
This however shouldn't discourage you from using the not-directly-supported features, because if the internal engine encounters unsupported syntax, it passes the expression on to the POSIX regex core (beginning from the first unsupported token; everything before that is still processed by the internal matcher). An example might make this clearer:
`([a-zA-Z])+` is processed internally; if a token unsupported by the internal matcher appears, that token (and everything beyond it) is processed by the POSIX core.
Examples of URL pairs that match:

Examples of URL pairs that don't match:
Flags are a binary OR of the following numbers:
The names of the constants are self-explanatory.
These constants are defined in `libclamav/phishcheck.h`; you can check there for the latest flags.

There is a default set of flags that are enabled; currently these are:

```
( CLEANUP_URL | CHECK_SSL | CHECK_CLOAKING | CHECK_IMG_URL )
```
SSL checking is currently performed only for `<a>` tags.
You must decide for each line in the domain list if you want to filter any flags (that is, if you don't want certain checks to be done), then calculate the binary OR of those constants, and then convert it into a 3-digit hex number. For example, you decide that `domain_sufficient` shouldn't be used for `ebay.com`, and you don't want to check images either, so you come up with this flag number:

```
2 | 256 => 258 (decimal) => 102 (hexadecimal)
```

So you add this line to `daily.pdb`:

```
R102 www.ebay.com .+
```
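The flag arithmetic above can be checked quickly (plain Python; the values 2 and 256 stand for the two flag constants from the example):

```python
flags = 2 | 256            # binary OR of the two flag values
print(flags)               # 258 (decimal)
print(format(flags, "x"))  # 102 (hexadecimal), used as the signature suffix
```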
### Introduction to regular expressions

Some general regex resources:

- regex quick start: http://www.regular-expressions.info/quickstart.html
- regex tutorial: http://www.regular-expressions.info/tutorial.html
- regex(7) man-page: https://linux.die.net/man/7/regex
The special characters are:

- `[`: the opening square bracket; it marks the beginning of a character class, see the section on character classes
- `\`: the backslash; escapes special characters, see the section on escaping
- `^`: the caret; matches the beginning of a line (not needed in ClamAV regexes, this is implied)
- `$`: the dollar sign; matches the end of a line (not needed in ClamAV regexes, this is implied)
- `.`: the period or dot; matches any character
- `|`: the vertical bar or pipe symbol; matches either of the tokens on its left and right side, see the section on alternation
- `?`: the question mark; optionally matches the left-side token, see the sections on optional matching and repetition
- `*`: the asterisk or star; matches 0 or more occurrences of the left-side token, see the sections on optional matching and repetition
- `+`: the plus sign; matches 1 or more occurrences of the left-side token, see the sections on optional matching and repetition
- `(`: the opening round bracket; marks the beginning of a group, see the section on groups
- `)`: the closing round bracket; marks the end of a group, see the section on groups
Escaping has two purposes:

- It allows you to match the special characters themselves; for example, to match the literal `+`, you would write `\+`.
- It also allows you to match non-printable characters, such as the tab (`\t`) and the newline (`\n`). However, since non-printable characters are not valid inside a URL, you won't have a reason to use them.
Groups are usually used together with repetition or alternation. For example, `(com|it)+` means: match 1 or more repetitions of `com` or `it`; that is, it matches `com`, `it`, `comcom`, `comit`, `itcom`, `itit`, `ititcom`, ... you get the idea.

Groups can also be used to extract substrings, but this is not supported by ClamAV, and not needed in this case either.
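You can experiment with this group-plus-repetition pattern using Python's `re` (anchored explicitly here, the way ClamAV implicitly anchors its regexes):

```python
import re

pattern = re.compile(r"^(com|it)+$")

for s in ["com", "it", "ititcom", "comitcom", "co", "comx"]:
    print(s, bool(pattern.match(s)))
```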
## How to create database files

If the phishing code claims that a certain mail is phishing, but it's not, you have 2 choices:

- Examine your rules in `daily.pdb`, and fix them if necessary (see: How to create database files)
- Add the URL to the allow list (discussed here)
Let's assume you are having problems because of links like this in a mail:

```html
<a href="http://18.104.22.168/bCentral/L.asp?L=XXXXXXXX">http://www.bcentral.it/</a>
```

After investigating those sites further, you decide they are no threat, and create a line like this in `daily.wdb`:

```
R http://www\.bcentral\.it/.+ http://69\.0\.241\.57/bCentral/L\.asp?L=.+
```
Note: URLs like the above can be used to track unique mail recipients, and thus to know if somebody actually reads mail (so they can send more spam). However, since this site required no authentication information, it is safe from a phishing point of view.
When not using `--phish-scan-alldomains` (in production environments, for example), you need to decide which URLs you are going to check. Although at first glance it might seem a good idea to check everything, that would produce false positives: newsletters, ads, etc. are particularly likely to use URLs that look like phishing attempts.
Let's assume that you've recently seen many phishing attempts claiming to come from PayPal. Thus you need to add paypal to `daily.pdb`:

```
R .+ .+\.paypal\.com
```

The above line will block (detect as phishing) mails that contain URLs claiming to lead to PayPal when in fact they don't. Be careful not to create regexes that match too broad a range of URLs, though.
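To see what the displayed-URL part of such a rule would and wouldn't match, here is a quick check with Python's `re` (an approximation only: ClamAV's own matcher and anchoring rules apply in reality):

```python
import re

# ".+\.paypal\.com", with the anchors ClamAV adds implicitly
displayed = re.compile(r"^.+\.paypal\.com$")

print(bool(displayed.match("www.paypal.com")))               # True
print(bool(displayed.match("paypal.com")))                   # False: a subdomain is required
print(bool(displayed.match("www.paypal.com.evil.example")))  # False: anchored at the end
```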
Whenever you see a false positive (a mail that is detected as phishing, but isn't), you might need to modify `daily.pdb` (if one of your rules there is too broad), or you may need to add the URL to `daily.wdb`. If you think the algorithm is incorrect, please file a bug report.
Check whether the mail is detected when scanning with `--phish-scan-alldomains`; if it is, then you need to add an appropriate line to `daily.pdb` (see How to create database files).
If the mail is not detected, then try using:

```shell
$ clamscan/clamscan --debug undetected.eml | less
```

Then see which URLs are being checked, whether any of them is in an allow list, whether all URLs are detected, etc.