PBR is not really supposed to be lightmapped; it is supposed to react to its environment. I think what we are supposed to have is deferred rendering,...
The defining characteristic of deferred rendering is that it essentially changes the complexity of scene rendering from O(geometry * lights) to O(geometry + lights).
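To make that complexity claim concrete, here is a minimal C++ sketch of the two loop structures. The functions drawMeshLit, drawMeshToGBuffer, and drawLightVolume are hypothetical stand-ins for whatever your renderer or graphics API actually does; this is an illustration of the loop shapes, not a real implementation.

```cpp
#include <vector>

struct Mesh {};
struct Light {};

// Hypothetical draw calls standing in for a real graphics API.
void drawMeshLit(const Mesh&, const Light&) {}
void drawMeshToGBuffer(const Mesh&) {}
void drawLightVolume(const Light&) {}

// Forward: every mesh is drawn once per light that affects it -> O(geometry * lights) draws.
void renderForward(const std::vector<Mesh>& meshes, const std::vector<Light>& lights) {
    for (const Mesh& m : meshes)
        for (const Light& l : lights)
            drawMeshLit(m, l);      // re-transforms and re-shades the mesh for each light
}

// Deferred: geometry is drawn once, then each light is processed once -> O(geometry + lights).
void renderDeferred(const std::vector<Mesh>& meshes, const std::vector<Light>& lights) {
    for (const Mesh& m : meshes)
        drawMeshToGBuffer(m);       // writes depth/position, normal, diffuse, etc.
    for (const Light& l : lights)
        drawLightVolume(l);         // reads the G-buffer and accumulates this light's contribution
}
```

With the 20 meshes and 20 lights from the example below, the forward loop issues up to 400 draws, while the deferred version issues 20 geometry draws plus 20 light draws.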
This is achieved by first rendering the scene using shaders designed to output basic attributes such as (at a minimum) position*, normal, and diffuse color. Other attributes might include per-pixel specular values and other material properties. These are stored in full screen render targets, collectively known as the G-buffer.
(*: It's worth noting that developers will more commonly opt to store depth, and use that to reconstruct position, since having depth available is useful for so many other effects.)
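As a rough illustration of what those render targets can hold, here is one possible G-buffer layout sketched in C++. The names, formats, and channel assignments below are assumptions for the sake of the example; real engines vary widely in how they lay this out.

```cpp
// A hypothetical G-buffer layout; not taken from any particular engine.
enum class Format { RGBA8, RG16F, RGBA16F, D24S8 };

struct GBufferTarget {
    const char* name;
    Format      format;
    const char* contents;
};

// Depth is stored instead of full position; position is reconstructed from it later.
const GBufferTarget kGBuffer[] = {
    { "Depth",    Format::D24S8, "hardware depth (position reconstructed from this)" },
    { "Normals",  Format::RG16F, "two components of the normal, third reconstructed on read" },
    { "Albedo",   Format::RGBA8, "diffuse color, spare alpha channel for another attribute" },
    { "Material", Format::RGBA8, "per-pixel specular values and other material properties" },
};
```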
Once the G-buffer has been generated, it's possible to compute a fully lit result for any pixel on the screen by solving the BRDF exactly once per pixel per light. In other words, if you have 20 meshes that each are affected by 20 lights, traditional ("forward") rendering would demand that you re-render each mesh several times in order to accumulate the result of each light affecting it.
In the worst case, this would be one draw call per mesh per light, or 400 total draw calls! For each of these draw calls, you're redundantly retransforming the vertices of the mesh.
There's also a good chance that you'll be shading pixels that aren't actually affected by the light, or won't be visible in the final result (because they'll be occluded by other geometry in the scene). Each of these results in wasted GPU resources.
Compare to deferred rendering: You only have to render the meshes once to populate the G-buffer. After that, for each light, you render a bounding shape that represents the extents of the light's influence. For a point light, this could be a small sphere, or for a directional light, it would be a full screen quad, since the entire scene is affected.
Then, when you're executing the pixel/fragment shader for that light's bounding volume, you read the geometry attributes from the appropriate position in the G-buffer textures, and use those values to determine the lighting result. Only the scene pixels that are visible in the final result are shaded, and they are shaded exactly once per light. This represents potentially huge savings.
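A hedged sketch of how that lighting pass might be orchestrated on the CPU side follows. Every type and function here (bindGBufferTextures, drawSphere, drawFullScreenQuad, setAdditiveBlending) is a hypothetical stand-in for whatever your graphics API provides; the point is the shape of the pass: bind the G-buffer once, then one draw per light.

```cpp
#include <vector>

// Hypothetical types and functions; stand-ins for a real rendering API.
struct GBuffer {};
struct Light { bool affectsWholeScreen; };

void bindGBufferTextures(const GBuffer&) {}
void drawFullScreenQuad() {}           // directional light: every pixel is affected
void drawBoundingSphere(const Light&) {} // point light: small sphere covering its extents
void setAdditiveBlending(bool) {}

// Lighting pass: one draw per light; each visible pixel is shaded at most once per light.
void lightingPass(const GBuffer& gbuffer, const std::vector<Light>& lights) {
    bindGBufferTextures(gbuffer);   // depth/normal/albedo become inputs to the light shader
    setAdditiveBlending(true);      // accumulate each light's contribution into the lit result
    for (const Light& l : lights) {
        if (l.affectsWholeScreen)
            drawFullScreenQuad();   // the fragment shader reads the G-buffer at each pixel,
        else                        // reconstructs position from depth, and evaluates the BRDF
            drawBoundingSphere(l);
    }
    setAdditiveBlending(false);
}
```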
However, it's not without drawbacks. It's a paradigm which is very difficult to extend to handle transparent geometry (google: depth peeling). So difficult, in fact, that virtually all deferred rendering implementations fall back to forward rendering for the transparent portions of the scene. Deferred rendering also consumes a large amount of VRAM and frame buffer bandwidth, which leads developers to go to great lengths to cleverly pack and compress G-buffer attributes into the smallest/fewest components possible.
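One classic example of that kind of packing, again only a sketch rather than any specific engine's scheme, is storing just the x and y of a unit normal and reconstructing z on read. This only works when the stored normals are guaranteed to face the camera (non-negative z in view space); implementations that need full coverage typically use encodings such as octahedral mapping instead.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Write side: keep two components in the G-buffer instead of three.
void packNormal(const Vec3& n, float& outX, float& outY) {
    outX = n.x;
    outY = n.y;
}

// Read side: z = sqrt(1 - x^2 - y^2), clamped to guard against rounding error.
// Assumes the normal faces the camera; otherwise the sign of z is lost.
Vec3 unpackNormal(float x, float y) {
    float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
    return { x, y, z };
}
```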
Welcome to the real world!