CSP is a great invention, but it can still be implemented poorly and fail to deliver its intended protection. What's more, CSP can protect against more than just XSS if you tailor the policy to your website's attack surface.
CSP in meta-tags as fallback
I've written about this before in my "I don't trust your browser" article, and I've been using this method in production for quite some time now. It's a debatable method and it has its drawbacks.
If you deliver a CSP both in a <meta> tag and via HTTP headers, you are "double protected". Even if an attacker has full rights to modify the CSP in the <meta> tag (which is possible), the HTTP header will not be overwritten. This means that even if an attacker can remove or change values in the <meta>-tag CSP, resources will still be validated against the policy in the header.
Note that the sandbox directive is not supported via meta tags.
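As a sketch, the fallback copy could look like this (the policy itself is just an example):

```html
<!-- Fallback copy of the policy the server already sends as:
     Content-Security-Policy: default-src 'none'; script-src 'self' -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'none'; script-src 'self'">
```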
Use data: instead of 'self' for img-src
If you have a few static images, you should consider using the data: scheme instead of 'self' for img-src. Whereas 'self' white-lists every subpath on the origin, the data: scheme only works in the current context.
This can protect you when an attacker controls the URL in an image tag and uses it to make requests on the client's behalf. Attacks such as internal site forgery and open redirects are prevented because the attacker won't be able to insert URLs into the image tag. Example: <img src="./user/settings/logout"/> is not possible because it does not use the data: scheme, and a scheme can't point to a subpath or file.
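As a minimal sketch of what inlining a static image looks like, the helper below base64-encodes raw image bytes into a data: URI that an img-src data: policy would permit (the embedded bytes are just a tiny hard-coded PNG for the example):

```python
import base64

def to_data_uri(raw: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data: URI, usable under img-src data:."""
    b64 = base64.b64encode(raw).decode("ascii")
    return f"data:{mime};base64,{b64}"

# A tiny PNG, hard-coded for the example.
png = base64.b64decode(
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
)
uri = to_data_uri(png)
print(uri[:30])  # data:image/png;base64,iVBORw0K
```

The resulting string goes straight into the src attribute: `<img src="data:image/png;base64,..."/>`.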
Do not use the data: scheme in any other directives.
Full URI in white-list
Let's say that you have this CSP:
script-src 'self' https://cdn.example.com/js/api/script.js
By white-listing the full URI instead of the whole domain, only that exact script can be loaded; an attacker who finds an injection can't pull in any other script hosted on cdn.example.com.
Define protocol and port for domain
If you have external domains in your CSP, you should also specify the scheme and port, for example:
script-src 'self' https://cdn.example.com:443/js/api/script.js
This can protect against port-hijacking attacks, where an attacker controls a different port on the host, for example cdn.example.com:8080, which serves different content. It can also help if an attacker is able to inject an Alt-Svc header for the domain and tell the client to use another protocol and/or port that the attacker controls (highly unlikely).
Avoid wildcards (*)
Using wildcards is a bad idea unless you have no other option, for instance if you rely on random subdomains (as Gravatar does). Rather white-list five domains than use a wildcard, though there's a limit: your header shouldn't grow too big.
By avoiding wildcards you reduce the attack surface greatly. It gives the attacker less content to abuse even if it were possible to use resources from a white-listed domain.
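To illustrate (the host names are hypothetical):

```
# Too broad: any subdomain may serve scripts
script-src https://*.example.com

# Better: only the hosts you actually use
script-src https://static.example.com https://api.example.com
```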
Avoid 'unsafe-inline'
As the name states, it really should be considered unsafe. It's hard to move all the event handlers and put all the inline style/JS into files, but it's really worth it. If you can't move a script, consider using a nonce (or a hash, if the inline script is static).
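As a framework-agnostic sketch, a per-response nonce just needs to be unguessable and fresh for every response; the same value then goes into both the header and the script tag (the surrounding names are placeholders):

```python
import secrets

def make_nonce() -> str:
    # 128 bits of randomness, base64url-encoded without padding.
    return secrets.token_urlsafe(16)

nonce = make_nonce()
header = f"Content-Security-Policy: script-src 'self' 'nonce-{nonce}'"
tag = f'<script nonce="{nonce}">/* inline script */</script>'
print(header)
```

A nonce must never be reused across responses, otherwise an attacker who learns it can execute their own inline script.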
Strict policy on pages that aren't meant to be accessed
You should have a global policy. And I mean truly global. For instance, see this example:
~> curl -siI foobar.baz
HTTP/1.1 302 Moved Temporarily
Content-Type: text/html
Content-Length: 161
Connection: keep-alive
Location: https://www.foobar.baz/
If a client for some strange reason isn't redirected to the HTTPS site, it would be left without any CSP at all. This could be the case if you're not HSTS-preloaded or the client doesn't have the domain in its HSTS cache.
Instead, block as much as you can on pages, ports and protocols that aren't meant to be accessed by a client, like:
~> curl -siI foobar.baz | grep Security
Content-Security-Policy: form-action 'none'; default-src 'none'; base-uri 'none'; referrer no-referrer; reflected-xss block
Location: https://www.foobar.baz/
There are a few cases where I've seen that a global CSP would have helped. For instance, a debug page that wasn't supposed to be accessible had no CSP and was vulnerable to XSS.
I've also seen this on custom 50x error pages where no policy, nor any other security headers, were present. Even without vulnerabilities on those pages, clickjacking was still possible due to the missing X-Frame-Options header (or frame-ancestors directive).
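As a hypothetical sketch of such a catch-all in nginx (the domain is taken from the example above, the rest is an assumption about your setup), even the plain-HTTP redirect carries a locked-down policy, so a client that never reaches HTTPS is still covered:

```nginx
server {
    listen 80 default_server;

    # Send a strict CSP along with the redirect itself.
    add_header Content-Security-Policy "default-src 'none'; form-action 'none'; base-uri 'none'" always;

    return 302 https://www.foobar.baz$request_uri;
}
```

The `always` parameter matters here: without it, nginx omits add_header on 3xx/4xx/5xx responses.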
Always set default-src to 'none'
If you don't set a given *-src directive, default-src is used as the fallback. Therefore you should always start with default-src and then work on the rest of the directives.
This tip is recommended to everyone, even those who have just started with CSP. It's easy to screw up and forget a directive, and then you fall back to default-src.
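In practice that workflow looks like this (the opened-up directives are just examples):

```
# Step 1: deny everything
Content-Security-Policy: default-src 'none'

# Step 2: open up only what the site actually needs
Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; img-src data:
```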
Use a referrer policy
CSP does have a referrer directive, but it will soon be deprecated. You should still keep it in your CSP a little longer. The Referrer-Policy header will replace the directive. I recommend a strict value such as no-referrer or origin-when-cross-origin.
This can protect against some open redirects (OAuth, oh-oh), information leakage, or even content injection (if an attacker can inject data into the referrer that is later used to execute code).
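During the transition, sending both could look like this (the policy values are examples):

```
# Soon-to-be-deprecated CSP directive:
Content-Security-Policy: default-src 'self'; referrer no-referrer

# Its standalone replacement:
Referrer-Policy: no-referrer
```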
Use dynamic CSP
Dynamic CSP can hugely reduce your website's attack surface! If only a few pages need a script, only those pages should be allowed to load it. I have written about dynamic CSP before;
I also have my NginX configuration file public if you want to see what my dynamic CSP looks like: https://github.com/intchloe/swehack/blob/master/nginx/swehack.org.conf#L100
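A hypothetical sketch of the idea in nginx (the /dashboard/ path and the policies are placeholders, not my actual configuration): map the request URI to a policy so the script host is only white-listed on pages that actually use it.

```nginx
# In the http context: pick a policy per URI.
map $uri $csp {
    default        "default-src 'none'";
    ~^/dashboard/  "default-src 'none'; script-src 'self' https://cdn.example.com:443/js/api/script.js";
}

server {
    listen 443 ssl;
    add_header Content-Security-Policy $csp always;
}
```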
CSP has the potential to protect against a lot of vulnerabilities, but it also gives the author a lot of room for mistakes. CSP bypasses have been a thing in the past and will most certainly be again.