In general, a binary code can represent anything, so it can only be interpreted in context.
For example, a 2-bit binary code could be
00 - Chicago
01 - Los Angeles
10 - New York City
11 - Houston
This scheme would only work for a business that has 4 offices and doesn't expect to expand. If it ever did expand, a 3rd bit would have to be added to the code. Every bit that gets added doubles the number of possibilities.
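To make the doubling concrete, here's a small Python sketch (the lookup-table form and variable names are my own, not part of the original example):

```python
# Each n-bit code can distinguish 2**n items, so every added bit doubles the count.
for n in range(1, 5):
    print(f"{n} bits -> {2**n} possible codes")

# The 2-bit office code above, written as a lookup table keyed by the binary code.
offices = {
    0b00: "Chicago",
    0b01: "Los Angeles",
    0b10: "New York City",
    0b11: "Houston",
}
print(offices[0b10])  # -> New York City
```

Adding a 3rd bit would simply mean allowing keys up to 0b111, giving 8 slots instead of 4.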
Those bits would usually be transparent to the user. Externally, those 4 codes would probably be represented as 0, 1, 2, 3 (or 1, 2, 3, 4), depending on how the computer program was written.
Binary codes are used because computer circuitry is designed to work with them. The most common use is when a code simply represents an integer that can be used for arithmetic calculations. Most typically, a 32-bit code is used, which can represent any 9-digit decimal number (a signed 32-bit integer tops out at 2,147,483,647). The format of the code is analogous to how we represent numbers in decimal: the rightmost digit has the least significance, and the place value goes up as you shift to the left.
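The place-value idea can be shown directly; this is just an illustrative sketch, with the bit string chosen arbitrarily:

```python
# Convert the bit string "1101" to decimal by place value:
# 1*8 + 1*4 + 0*2 + 1*1 = 13
bits = "1101"
value = 0
for bit in bits:
    value = value * 2 + int(bit)  # shift everything left one place, add the new bit
print(value)         # -> 13
print(int(bits, 2))  # Python's built-in base-2 conversion gives the same result

# The largest signed 32-bit integer: 2**31 - 1
print(2**31 - 1)     # -> 2147483647
```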
The other common use is representing alphabetic characters and other symbols. In ASCII (and Unicode), a code with a numeric value of 65 is interpreted as an upper-case "A". Someone cannot just look at the code and know whether it means the number 65 or the letter "A"; you'd have to know how the item was coded, generally by looking at documentation.
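Python's built-in `chr` and `ord` functions make the two interpretations of the same code easy to see:

```python
code = 65
print(chr(code))            # interpreted as text: 'A'
print(code + 1)             # interpreted as a number: 66
print(ord("A"))             # from the character back to its numeric code: 65
print(format(code, "08b"))  # the underlying 8-bit pattern: 01000001
```

The bit pattern 01000001 is the same either way; only the interpretation differs.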
Finally, "binary code" can also refer to a machine language program, often the result of compiling a program that was written in a traditional programming language. The machine language program is suitable for direct execution by the hardware.
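Real machine language is CPU-specific, but Python's standard `dis` module offers a rough analogy: source text is compiled into a lower-level binary form (Python bytecode here, rather than native machine code), which an execution engine then runs. This is a sketch of the idea, not actual hardware machine code:

```python
import dis

# Compile a tiny program to Python bytecode.
code = compile("x = 2 + 3", "<example>", "exec")

print(code.co_code)  # the raw binary code, as a bytes object
dis.dis(code)        # a human-readable disassembly of those bytes
```

Just like the earlier examples, `code.co_code` is only meaningful in context: the interpreter knows how to decode those bytes as instructions, while to anyone else they are just numbers.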