Assembly language programming vs Other programming languages

In summary: As an example, my boss was a fantastic assembly code programmer. He would do things like reuse the program-initialization area as a buffer for reading in data from files. Doing this saved memory, at the expense of not being able to easily debug the program, because a section of code had been overwritten with data. His rationale was that you wouldn't necessarily need to see that code, since it had already run and was no longer needed. As part of initialization, he would set up a long string of load and store instructions to build a list of numbers:

lda #1
sta tbl+0
lda #2
sta tbl+1
...

When I saw the code, I told him I could make it more compact using a loop, and he said I should
  • #36
I write entire apps and drivers in all assembly (currently under Windows), and have done so for more than three decades. It continues today. Unless somebody else replying to this thread has done the same in the past, say, year, they are reciting their conditioning and they are not speaking from experience.

https://www.codeproject.com/Articles/1182724/Blowing-the-Doors-Off-D-Math-Part-I-Matrix-Multipl

The above link (yes it's the complete link) is an article I posted on April 17, 2017 (last updated April 19).

If the time frame involved in writing an all-assembly app were truly in the trillions of millennia, as represented, I could not do what I do. However, I do what I do; therefore it stands to reason that the time frame involved is just not that long.

I could write a novel on the benefits of using all-assembly. Things do what you tell them to do, when you tell them to do it. Just about everything is within reach. Nothing moves unless it's told to move. When one forgets that conditioning is just the emotional opinions of others, ASM becomes impossible. When one lets the task stand or fall on its own merit, the task becomes quite doable.
 
  • #37
CMalcheski said:
I write entire apps and drivers in all assembly (currently under Windows), and have done so for more than three decades. It continues today. Unless somebody else replying to this thread has done the same in the past, say, year, they are reciting their conditioning and they are not speaking from experience.

https://www.codeproject.com/Articles/1182724/Blowing-the-Doors-Off-D-Math-Part-I-Matrix-Multipl

The above link (yes it's the complete link) is an article I posted on April 17, 2017 (last updated April 19).

If the time frame involved in writing an all-assembly app were truly in the trillions of millennia, as represented, I could not do what I do. However, I do what I do; therefore it stands to reason that the time frame involved is just not that long.

I could write a novel on the benefits of using all-assembly. Things do what you tell them to do, when you tell them to do it. Just about everything is within reach. Nothing moves unless it's told to move. When one forgets that conditioning is just the emotional opinions of others, ASM becomes impossible. When one lets the task stand or fall on its own merit, the task becomes quite doable.

If I asked you to write the same piece of software in, say, both Java and assembly, how much more time would the assembly version take compared to the Java one?
 
  • #38
It's really not feasible to give a comprehensive answer to such a question. First, how large is the app? Second, what is it doing? Third, who is doing the coding?

Having used ASM exclusively for decades, I have a very comprehensive personal library of common functions that I use - something a person writing their first app would not have access to. I've also accumulated a lot of knowledge of Windows peculiarities that would take anyone else years and years to acquire.

Every job listing online pushes experience and education - which, curiously, ranked 9th and dead last, respectively, in a list of factors that predict employee performance. The point is, the more you repeat something, the faster you're likely to get at it.

In writing the article I referenced, I tried running a timing test in VS2017. The most basic function, RDTSC(), crapped out (bugs) on the second call. It returned all 0's. The first time it was called, it worked fine. I could have sat there for half an hour fighting it, which only would have led me to tracing through it at the disassembly level anyway. But it failed to operate the way it should operate. There is no excuse for it. None. And it would have been a major ordeal to figure out why it wasn't behaving.
It was hardly unique. It was the norm among high-level languages: things don't work as advertised, and the developer has to find a workaround. In ASM I don't have such problems: I issue RDTSC as an ASM instruction and it gives me the data I requested.
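For anyone who hasn't done this kind of measurement, here is a minimal sketch of what I mean, written in C++ with the __rdtsc() compiler intrinsic (the loop being timed is just filler for illustration; the point is that the instruction itself simply hands back the timestamp counter):

Code:
#include <intrin.h>   // __rdtsc on MSVC; GCC/Clang expose it via <x86intrin.h>
#include <cstdio>

int main()
{
    // Read the timestamp counter before and after the work being measured.
    // On modern CPUs the raw TSC is affected by out-of-order execution and
    // frequency scaling, so treat the result as a rough figure only.
    unsigned long long start = __rdtsc();

    volatile double sum = 0.0;            // volatile so the loop isn't optimized away
    for (int i = 0; i < 1000000; ++i)
        sum = sum + i * 0.5;

    unsigned long long finish = __rdtsc();
    std::printf("elapsed cycles (approx): %llu\n", finish - start);
    return 0;
}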

I calculated long ago that 95% of my time developing went to handling Windows bugs and missing or incorrect documentation. That number would only have gone higher if I had used anything but ASM.

You probably wouldn't compare the performance of two NASCAR cars by putting a newbie driver in one and an old pro in another. So there are many caveats and variables in answering the question directly.
 
  • #39
CMalcheski said:
You probably wouldn't compare the performance of two NASCAR cars by putting a newbie driver in one and an old pro in another. So there are many caveats and variables in answering the question directly.
Hm, as far as I understood it, the same person is put to work in both assembly and Java... not a Java expert vs. an assembly newbie (or the other way around)...

Looking at your code for the matrix multiplication, I think the main problem would come when giving that code to someone else to inspect... of course, I think everyone has admitted that assembly can outmatch the higher-level stuff in time performance, but I think it can become quite terrible if, for example, you want to write very complicated software - if you needed ~5 pages of code (repetitive in most of its parts) for a matrix multiplication... I think for a bigger project someone would need a book, which would make handling problems impossible? (Just thinking out loud here; I was fascinated by the anti-laziness, but maybe we shouldn't go to the other extreme and be impractical.)
 
  • #40
Let me chime in :rolleyes:
1. The PC is not all the programming that exists. I work in a company where programming is 90% for embedded systems. A colleague of mine wrote a multitasking OS for ARM, mostly for fun. Most of it is C++ but it would be impossible without some knowledge of ARM's assembly.
Also there are processors in sensors etc. that need to save power, or to perform as many measurements as possible on as cheap a processor as possible. We use a lot of C, but there are always a few assembly routines, mostly to control the processor's inputs, outputs and sleep modes, or to add/multiply long numbers (see the sketch after this list for the flavor of such a routine).

2. If you use C++ or C, you'll eventually end up in the debugger's disassembly view. That is the most common place where a typical programmer will meet assembly. Obviously that can't happen if you work in Java or ActionScript or whatever.

3. Assembly is perfectly fine for small and time-critical tasks, but I wouldn't want to write a text editor or a spreadsheet in it. Render a letter on the screen? No problem.
This opinion may be a result of mental conditioning, but it's planted pretty firmly in me.
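To put some flesh on point 1 above, here is a minimal sketch (in C++ with a GCC-style inline assembly statement; the function and the loop are purely illustrative) of the kind of tiny assembly routine I mean, putting the core to sleep until the next interrupt:

Code:
// Illustrative only: on an ARM core, going to sleep until the next interrupt
// is a single instruction. The asm statement is volatile so the compiler can
// neither remove it nor reorder around it.
static inline void cpu_sleep()
{
#if defined(__arm__) || defined(__aarch64__)
    __asm__ volatile("wfi");   // Wait For Interrupt
#endif
}

int main()
{
    for (;;)
    {
        // ... read a sensor, update the outputs ...
        cpu_sleep();           // nothing to do until the next interrupt fires
    }
}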
 
  • #41
CMalcheski said:
I write entire apps and drivers in all assembly (currently under Windows), and have done so for more than three decades. It continues today. Unless somebody else replying to this thread has done the same in the past, say, year, they are reciting their conditioning and they are not speaking from experience.
Most people here are speaking from experience. It's just a different kind of experience. You have over 3 decades of experience? Fine. I have over 4. Many people here have a lot more than me.

Many programs would be inconceivable to do in assembly code. They are not just hundreds of thousands of lines of high-level code -- they are hundreds of thousands of FILES of high level source code.
 
  • #42
ChrisVer said:
Hm, as far as I understood it, the same person is put to work in both assembly and Java... not a Java expert vs. an assembly newbie (or the other way around)...

Looking at your code for the matrix multiplication, I think the main problem would come when giving that code to someone else to inspect... of course, I think everyone has admitted that assembly can outmatch the higher-level stuff in time performance, but I think it can become quite terrible if, for example, you want to write very complicated software - if you needed ~5 pages of code (repetitive in most of its parts) for a matrix multiplication... I think for a bigger project someone would need a book, which would make handling problems impossible? (Just thinking out loud here; I was fascinated by the anti-laziness, but maybe we shouldn't go to the other extreme and be impractical.)
I agree, with one caveat. If speed is desired, any code would be significantly scrambled by a high-level, globally optimizing compiler. IMO, an assembly language programmer would find it practically impossible to compete on a large program.
 
  • #43
I used to write code for a radio telescope control system in machine language. The problem was that the program had to run synchronously with both the antenna pointing system and the incoming data. And I mean synchronously, not interrupt-driven. That's hard to do with a compiled language; compilers just don't give you that level of control over the process. However, modern compiled languages and high-speed computers make this unnecessary except for the most basic of applications. Drivers and embedded systems are probably exceptions.
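For readers who have never seen that style of code, here is a rough sketch in C++ of what synchronous polling of a device looks like. The register addresses and bit mask are invented for illustration; the real thing was machine language precisely because a compiler gives no guarantees about the timing of these accesses:

Code:
#include <cstdint>

// Hypothetical memory-mapped status and data registers (addresses are made up).
volatile std::uint32_t* const STATUS_REG = reinterpret_cast<volatile std::uint32_t*>(0x40001000);
volatile std::uint32_t* const DATA_REG   = reinterpret_cast<volatile std::uint32_t*>(0x40001004);
constexpr std::uint32_t DATA_READY       = 1u << 0;

std::uint32_t read_sample_synchronously()
{
    // Busy-wait until the hardware raises the "data ready" bit, then read.
    // No interrupts: the loop itself is the synchronization with the device.
    while ((*STATUS_REG & DATA_READY) == 0)
    {
        // spin
    }
    return *DATA_REG;
}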
 
  • #44
FactChecker said:
Most people here are speaking from experience. It's just a different kind of experience. You have over 3 decades of experience? Fine. I have over 4. Many people here have a lot more than me.

Many programs would be inconceivable to do in assembly code. They are not just hundreds of thousands of lines of high-level code -- they are hundreds of thousands of FILES of high level source code.

Hundreds of thousands of files? I'd say there's some serious room for improvement there. I've noticed that taking hours to compile one app is not uncommon in high-level languages. It is exceedingly rare for anything I create to require more than ten seconds to compile.

As for experience ... writing all-assembly apps and nothing else? Who employs such people?
 
  • #45
I don't do Java. Never have, never will.
 
  • #46
SlowThinker said:
Let me chime in :rolleyes:
1. The PC is not all the programming that exists. I work in a company where programming is 90% for embedded systems. A colleague of mine wrote a multitasking OS for ARM, mostly for fun. Most of it is C++ but it would be impossible without some knowledge of ARM's assembly.
Also there are processors in sensors etc. that need to save power, or to perform as many measurements as possible on as cheap a processor as possible. We use a lot of C, but there are always a few assembly routines, mostly to control the processor's inputs, outputs and sleep modes, or to add/multiply long numbers.

2. If you use C++ or C, you'll eventually end up in the debugger's disassembly view. That is the most common place where a typical programmer will meet assembly. Obviously that can't happen if you work in Java or ActionScript or whatever.

3. Assembly is perfectly fine for small and time-critical tasks, but I wouldn't want to write a text editor or a spreadsheet in it. Render a letter on the screen? No problem.
This opinion may be a result of mental conditioning, but it's planted pretty firmly in me.

The impression I get (which may or may not be anywhere near correct) is that ARM processors have taken over the universe except for the PC, and that nothing else is used in any device apart from desktops and servers.
 
  • #47
I was at an interesting lecture quite a few years ago where a very convincing case was made that, on modern hardware, a well-written optimising compiler designed to support that specific hardware will generally produce faster code than all but the most skilled assembler programmers, because of the knowledge built into the compiler to take maximum advantage of specialist and proprietary hardware features - examples were given around caching and pipelining. Assuming, of course, that the high-level program is itself written in an efficient way!

Few assembler programmers would be likely to have that level of intimate awareness of their specific hardware - and of course, if they then ported their software onto different hardware, it might no longer run so efficiently - whereas a high-level program recompiled with a compiler appropriate to the new hardware would retain its performance.

I've never tested it, but it seems plausible to me.
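As a concrete illustration of the caching point (a standard textbook example, not one from the lecture): the two C++ routines below compute the same matrix product, but the second orders its inner loops so that both B and C are walked row by row, which is far friendlier to the cache. A good optimising compiler will often perform this kind of transformation, plus vectorisation, on its own.

Code:
#include <algorithm>
#include <vector>

// N x N matrices stored row-major in flat vectors.
void multiply_naive(const std::vector<double>& A, const std::vector<double>& B,
                    std::vector<double>& C, int N)
{
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
        {
            double sum = 0.0;
            for (int k = 0; k < N; ++k)
                sum += A[i * N + k] * B[k * N + j];   // B is walked column-wise: cache-unfriendly
            C[i * N + j] = sum;
        }
}

void multiply_cache_friendly(const std::vector<double>& A, const std::vector<double>& B,
                             std::vector<double>& C, int N)
{
    std::fill(C.begin(), C.end(), 0.0);
    for (int i = 0; i < N; ++i)
        for (int k = 0; k < N; ++k)
        {
            const double a = A[i * N + k];
            for (int j = 0; j < N; ++j)
                C[i * N + j] += a * B[k * N + j];     // B and C walked row-wise: sequential accesses
        }
}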
 
  • #48
I have never seen nor heard of an actual app ever being ported. I'm sure it happens; I just don't see where. And I view the entire process as the ultimate in shoddy shortcuts and laziness. Just my personal opinion; we all have one.

By the logic of "portability first, quality second," automobile engines should be made universal (be all things to all vehicles) because one might one day need to be ported over to the Space Shuttle and God forbid they should have to actually do some work and build a new engine suitable to the Shuttle. Oh wait, they use custom engines for the Shuttle too. Who'da thunk, the world keeps turning!
 
  • #49
CMalcheski said:
Hundreds of thousands of files? I'd say there's some serious room for improvement there. I've noticed that taking hours to compile one app is not uncommon in high-level languages. It is exceedingly rare for anything I create to require more than ten seconds to compile.
Ha! I admit that is not typical of programming. These were projects with hundreds of programmers, running on networks of computers. Even if they knew how to improve it, the refactoring and rewrite could never be funded. Although projects of that size are not frequent, I think they have a great influence on programming practices and state of the art.
As for experience ... writing all-assembly apps and nothing else? Who employs such people?
I have the same question.
 
  • #50
Actually the demand for such people is considerable, although still niche - as long as you have an active $20k security clearance, which no company will pay for, because in their eyes a candidate without fluffy credentials (5+ years of "experience" in each of 916 languages and a b.s. [we all know what b.s. stands for] degree) obviously cannot program. Ability is irrelevant. How much education do you have? How much experience? To find a company that hasn't been seduced by this blind dogmatic addiction is to find pure gold.

One relatively recent study ranked experience and education ninth and dead last in a list of twelve factors that predict employee performance. And so the demand for experience and "education" becomes more and more deeply embedded each day. It's very similar to realtors receiving 90% of their business from referrals and, in response, putting 90% of their ad budget into print advertising. Or to very nearly every major invention in history being attributable to a single individual, while 100% of all charity, research, grant, etc. dollars go to large groups.

Looking for sanity in the human race might well be an exercise in futility. Isn't there an app for that?
 
  • #51
CMalcheski said:
I have never seen nor heard of an actual app ever being ported
I have been out of the industry for a while, but early in my career I was coding commercial data-processing applications in 100% IBM assembler! It seems surreal to think of it these days. This software was 'ported' every time there was an upgrade to a new processor version, even if it was nominally the same kind of machine.

I imagine the same would be true of assembler components in more recent systems, unless all such are hardware-specific drivers and such. So not 'porting' as in moving from one manufacturer to another maybe - but porting from one hardware version to another more recent one with different behaviour? Seems likely?
 
  • #52
"Porting" to upgraded hardware ... absolutely. I just never thought of it as "porting," in the commonly-accepted sense of the word, but really that's exactly what it is.

Ultimately everybody finds their niche and does the work they are most drawn to doing. Some like me are wretched outcasts, condescendingly viewed as idiots who just don't get it, but I'm happy with what I write and that's what matters most to me.
 
  • #53
Assembly language was useful in the past to squeeze every last bit of performance out of a single-processor, strictly sequential system. Nowadays it isn't as relevant, since performance programming has moved on to multicore systems with parallel computing and perhaps GPU computing. Assembly language is basically a list of sequential instructions for a CPU, but in modern systems, instruction handling isn't sequential or necessarily synchronous.
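To make that concrete, here is a small C++ sketch (the function names and the workload are purely illustrative) of the kind of parallelism that now matters more than hand-tuned instruction sequences: split the work across the available hardware threads and let each core run an ordinary compiled loop.

Code:
#include <algorithm>
#include <cstdio>
#include <future>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large array by dividing it among the available hardware threads.
double parallel_sum(const std::vector<double>& data)
{
    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = data.size() / n_threads;

    std::vector<std::future<double>> parts;
    for (unsigned t = 0; t < n_threads; ++t)
    {
        auto begin = data.begin() + t * chunk;
        auto end   = (t + 1 == n_threads) ? data.end() : begin + chunk;
        parts.push_back(std::async(std::launch::async,
            [begin, end] { return std::accumulate(begin, end, 0.0); }));
    }

    double total = 0.0;
    for (auto& p : parts)
        total += p.get();                 // each core sums its own chunk
    return total;
}

int main()
{
    std::vector<double> data(10000000, 1.0);
    std::printf("sum = %f\n", parallel_sum(data));
    return 0;
}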
 
  • #54
Assembly language could have been by far the best, but then C came along, a higher-level language that was soon widely adopted. Unfortunately, it is (in my opinion) not the best, since it is not as efficient as it should be due to the way it is designed. Assembly code directly addresses all computer machine processes, of which there are many combinations. Many years ago, I designed an assembly code "typewriter" for a newly introduced microcomputer that had keys labeled with all of the assembly code values. The intent was to make coding highly efficient. This would probably not be feasible for C code. Unfortunately, the microcomputer company discontinued their microcomputer production line.
 
  • #55
CMalcheski said:
I have never seen nor heard of an actual app ever being ported. I'm sure it happens; I just don't see where. And I view the entire process as the ultimate in shoddy shortcuts and laziness. Just my personal opinion; we all have one.
Code reuse is desirable. I wouldn't consider it shoddy or lazy.
 
  • #56
I would argue that assembly language is *not* faster in anything other than perhaps the smallest microcontrollers.

I've got 30+ years behind the keyboard, man and boy. I've written large to very large chunks of software in assembly on a dozen different CPUs: 68xx, 68000, 68020, 68360, 68HC11, TMS34020, 8086, 286, 386, 486, 8052, etc.

I spent a decade programming from schematics; today I work on highly scalable and fault-tolerant Java server-side stuff.

The first fact of modern life is that counting cycles doesn't work with modern processors since they have pipelines. The compiler has a better view of instruction scheduling than you do, and no one can pretend that they can juggle that viewpoint on any non-trivial assembly language app. So unless you're writing for some simple-minded microcontroller, you can't claim it's faster until you run them side by side - in which case why write the assembler version in the first place? Profilers will tell you if it's needed.

The second fact of modern life is that scalability matters more than throwing raw CPU at it. So what if your Java code runs 1/4 as fast as assembly code? You write some readable code using an understandable pattern, throw some more cores at it and then you go make progress on the *purpose* you wrote the code for. Sitting around admiring all the shiny bits is not solving problems, it's a sort of engineering masturbation practiced by engineers who haven't grown up. (and I say that as a software engineer who can gold plate requirements and be more anal about code than most so I'm definitely including myself in that group)

The code is not the goal. Solving problems is the goal. The readability, scalability, maintainability of the code all matter far more than the performance, for a lot of reasons.

Biggest reason is this: You know where your code spends most of its time? No, you don't. If you don't have profiler output from your application running in a production environment, then all you're doing is guessing, and I'd guesstimate a 95% chance that you're wrong. I've done it on many projects; even when we were absolutely confident where the problem would be, we were wrong until at least the third pass through the profiling process.

Second biggest reason is that if your code is modular, based on standard patterns, and readable then you can identify the tiny percentage of your code that actually *might* be a performance bottleneck using readily available high level tools. You can't go the other direction, there aren't any tools to reverse engineer a buttload of spaghetti code hacked out in the name of "efficiency" and turn it into something readable/valuable. Also if the code is readable and based on standard patterns then you can at least understand it and consider whether you can solve the problem simply using scalability rather than resorting to the brute force/blunt axe of assembly language to cover up your design mistake or incorrect choice of algorithms.

In Agile you have the concept of the "last responsible moment": you put off defining or doing work until as late as you can, because the later in the process you are, the less likely you are to be affected by changes. To start a project by writing assembly code is the ultimate in "first irresponsible moment", because you've made the decision to invest maximum effort at a time when you've also got maximum uncertainty. If you get down to the last possible straw and assembly language is the only way to solve the problem, then you've done your due diligence and it really has to be that way. Otherwise you're just fondling your pocket protector.

The only valid reason I know of for writing assembly code from the start is a manufacturing cost one. If you're going to make a million devices and you can save $1 on each one if you can use a cheaper microcontroller or CPU, you _might_ have an argument...though I'd still argue that the company's reputation for producing reliable products is a big implicit cost that often gets left out of that discussion.
 
  • #57
What has been written in this thread about assembler code versus compiled code does not, I believe, touch an equally critical argument for compiled programming. Modern processor architectures have multiple caches, preprocess instructions, pre-execute code based on assumptions about the most probable branching of the code, etcetera. Writing assembler code that takes those issues into consideration is, realistically, very close to impossible. Compilers have so-called "switches" that tell the compiler whether the code it generates should be optimized for speed, for code density, or for a balance between those priorities. It then generates code that looks very different depending on the setting of such switches.

I wrote the whole firmware for an NEC 7220 graphics controller used in a terminal, writing the assembler code for the MC6809. I guess none of you readers have ever heard of the NEC 7220 graphics controller, which was very popular in the early 80's! Today, as has been correctly stated in this thread, neither memory nor CPU performance is really the limiting factor, and so I now use the language Python very much. This is an interpreted language, which tends to be much slower than compiled code. "Tends to be" is the right term, because languages are made of instructions, and those are provided to a large extent as elements of libraries. Python, a scripting language, can still be very efficient because the library elements may themselves have been coded in C or C++, so they are already present as machine code that executes directly on the processor. So a scripting language like Python can be just the tool to glue together the precompiled code of its library elements!
 
  • #58
IDNeon said:
There are faster compiling languages than "C"; we're talking about large factors faster.

https://brenocon.com/blog/2009/09/d...t-and-most-elegant-big-data-munging-language/

MAWK is one of the fastest languages out there and that makes a HUGE difference in transforming massive amounts of data where parallel computing becomes important.
As I understand it, that is NOT a general purpose language. It is "extremely specialized for processing delimited text files in a single pass"
 
  • #59
I assume that modern assemblers have the same optimization options that higher level language compilers have.
 
  • #60
phinds said:
As I understand it, that is NOT a general purpose language. It is "extremely specialized for processing delimited text files in a single pass"
Well it does quite a bit more though, and is good for number crunching. Which is why I thought it interesting to mention here.
 
  • #61
FactChecker said:
I assume that modern assemblers have the same optimization options that higher level language compilers have.
I don't think assemblers HAVE optimization. The assumption is that you mean exactly what you write.
 
  • #62
phinds said:
I don't think assemblers HAVE optimization. The assumption is that you mean exactly what you write.
A wizard is never buggy. He writes precisely what he means to.
 
  • #63
As has been mentioned earlier, assembler is used very sparingly. Even in situations where there is "no choice", the assembler is often isolated. For example, most CPUs provide a selection of "atomic" operations such as "compare and exchange". But rather than code the assembly, there are functions that usually place the required assembly inline with your code. In the Microsoft development environment, this is the "InterlockedCompareExchange()" function:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms683560(v=vs.85).aspx
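Here is a minimal sketch of how that function gets used in practice (this little spinlock is purely illustrative, not code from any particular product):

Code:
#include <windows.h>

// InterlockedCompareExchange(dest, exchange, comparand) atomically sets *dest
// to 'exchange' only if *dest currently equals 'comparand', and returns the
// value *dest held before the call.
static volatile LONG g_lock = 0;

void acquire_lock()
{
    // Keep trying to flip the lock from 0 (free) to 1 (held).
    while (InterlockedCompareExchange(&g_lock, 1, 0) != 0)
    {
        // spin until the previous owner releases it
    }
}

void release_lock()
{
    InterlockedExchange(&g_lock, 0);   // atomically mark the lock free again
}

int main()
{
    acquire_lock();
    // ... critical section ...
    release_lock();
    return 0;
}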

Still, there are cases where a particular block of code needs to be executed as fast as possible - either because a very rapid response is needed or (more likely) because that block of code is executed millions of times. In such cases, the first strategy is to devise a fast algorithm and allow certain optimizations, like inlining functions into the main program and flattening loops. Many compilers provide options for doing exactly this - and those optimizations can be applied specifically to the section of code that requires it.

But if all that is not enough, sometimes assembly code can be crafted that is substantially faster than compiler-generated assembly. The programmer can often devise ways of arranging for the code to use the different types of memory access cycles most efficiently, and sometimes there are special functions provided by the CPU that can be used creatively to perform unexpected functions. In some cases, the result can be a two- or three-fold increase in speed. In a recent case, I was able to use machine instructions intended for digital signal processing to rapidly compute the median value of an integer array. But the situation was so specialized that it would make no sense to try to get a C++ compiler to mimic the technique.
 
  • #64
phinds said:
I don't think assemblers HAVE optimization. The assumption is that you mean exactly what you write.
That is correct. Assemblers don't optimize.
 
  • #65
phinds said:
I don't think assemblers HAVE optimization. The assumption is that you mean exactly what you write.
Ok. I'll buy that.
 
  • #66
Consider that in order to craft more optimized assembly than a compiler for a particular computer architecture, you need to be an expert at that particular architecture. And if you want it to run optimally on another computer, you'll have to find an expert to rewrite it for that computer. One of the best ways to improve the performance of a code is to run it on a faster system. If you hand craft it in assembly, you lose that option.

If you must get it to run as fast as possible, you should probably write it in FORTRAN or C.
 
  • #67
Khashishi said:
Consider that in order to craft more optimized assembly than a compiler for a particular computer architecture, you need to be an expert at that particular architecture. And if you want it to run optimally on another computer, you'll have to find an expert to rewrite it for that computer. One of the best ways to improve the performance of a code is to run it on a faster system. If you hand craft it in assembly, you lose that option.

If you must get it to run as fast as possible, you should probably write it in FORTRAN or C.
The case I cited above was a mass-produced embedded system with heat limitations. The issue isn't "as fast as possible" but simply "fast enough to keep up with a real-time data stream". Moving to a faster processor would have generated too much heat and consumed too much power - and the additional cost would have made the product much less competitive. As processors advance, we upgrade to the new technology, become experts in the new architecture, and create an improved product. Sometimes that means assembly. But the total assembly is always way less than 1% of the code.
 
  • #68
TheOldFart said:
I would argue that assembly language is *not* faster in anything other than perhaps the smallest microcontrollers.

Take a look at post #21:

Assembly language programming vs Other programming languages

These are unusual cases, but they do exist.

The 500+ lines of speed-optimized CRC assembly code using pclmulqdq are something that a compiler isn't going to be able to duplicate, even with an intrinsic function for the pclmulqdq instruction. Intel wrote a document about this:

http://www.intel.com/content/dam/ww...ation-generic-polynomials-pclmulqdq-paper.pdf
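Just to show the basic building block involved (this is only a sketch of the intrinsic itself, nowhere near the full folding algorithm described in the Intel paper; compile with PCLMULQDQ support, e.g. -mpclmul on GCC/Clang):

Code:
#include <immintrin.h>
#include <cstdint>
#include <cstdio>

int main()
{
    // Carry-less multiply of two 64-bit values. The CRC folding technique in
    // the Intel paper chains many of these to reduce a large buffer one chunk
    // at a time.
    __m128i a = _mm_set_epi64x(0, 0x123456789abcdef0LL);
    __m128i b = _mm_set_epi64x(0, 0x0fedcba987654321LL);
    __m128i product = _mm_clmulepi64_si128(a, b, 0x00);   // multiply the low halves

    std::uint64_t out[2];
    _mm_storeu_si128(reinterpret_cast<__m128i*>(out), product);
    std::printf("carry-less product = %016llx%016llx\n",
                (unsigned long long)out[1], (unsigned long long)out[0]);
    return 0;
}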

The IBM mainframe assembly code is a combination of legacy code for file access methods (assembly macros). I'm not sure if the assembly modules using processor-specific instructions for speed could be replaced by Cobol, assuming its optimizer takes advantage of those instructions, but who's going to replace fairly large libraries of working assembler code?
 
  • #69
.Scott said:
Still, there are cases where a particular block of code needs to be executed as fast as possible - either because a very rapid response is needed or (more likely) because that block of code is executed millions of times. In such cases, the first strategy is to devise a fast algorithm and allow certain optimizations, like inlining functions into the main program and flattening loops. Many compilers provide options for doing exactly this - and those optimizations can be applied specifically to the section of code that requires it.

But if all that is not enough, sometimes assembly code can be crafted that is substantially faster than compiler-generated assembly.
Indeed. I used to work in the MS Windows Division, and had access to the Windows code base. I remember seeing some video driver code with a few lines of inline assembly, possibly for lighting up pixels in display memory (that was about 10 or so years ago, so was somewhere in the XP or Vista timeframe).

A few years later I was working in the Office Division and spotted some code for processing audio that used some of the Intel SIMD instructions. I don't know if that code actually shipped in any Office product, but I knew that someone had written it to take advantage of SIMD (single instruction, multiple data) technology in the later Pentium and (I believe) AMD processors.

The point is that when you need to push a huge amount of data through at a high rate of speed, either for video display or speech recognition purposes, some well-crafted assembly can be very useful.
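For anyone curious what that kind of code looks like from C++, here is a rough sketch using the SSE intrinsics (the gain operation and the buffer are invented for illustration; the idea is simply that four samples are processed per instruction instead of one):

Code:
#include <immintrin.h>   // SSE intrinsics
#include <cstddef>
#include <vector>

// Scale a block of audio samples by a constant gain, four floats at a time.
void apply_gain(std::vector<float>& samples, float gain)
{
    const __m128 g = _mm_set1_ps(gain);        // broadcast the gain into all four lanes
    std::size_t i = 0;
    for (; i + 4 <= samples.size(); i += 4)
    {
        __m128 s = _mm_loadu_ps(&samples[i]);  // load 4 samples
        s = _mm_mul_ps(s, g);                  // multiply all 4 in one instruction
        _mm_storeu_ps(&samples[i], s);         // store them back
    }
    for (; i < samples.size(); ++i)            // scalar tail for any leftovers
        samples[i] *= gain;
}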
 
  • #70
@phinds - we've had to clean up some bad posts and your answer is kind of dangling now. Sorry. Remember 'dangling participles' way back when? :smile:
 
