Project descriptions for ECS201B
Proposal due: Friday, February 3rd
Status Report due: Friday, February 24th
Final Report due: Friday, March 17th
Intro/Overview
Your assignment is to pick some topic that you find interesting and do
some original research on that subject. Since you have already had
201A, your goal should be to produce a publishable piece of work.
You can work by yourself or in a group of 2 or 3 students. The paper
should be similar in style to the conference papers that we will read
in class. These projects will be graded on roughly 4 different things:
- How well the problem is defined and motivated
- How extensive the survey of previous work is
- The experimental technique used
- The quality of the presentation of the results
Alternatively, you may write a survey paper of an area within computer
architecture. These papers should contain:
- A summary of previous work in an area, including extensive references
- A presentation of opinions of other authors both for and against
  various options (again, with references)
- A conclusion containing your opinion of the strengths and weaknesses
  of the arguments presented above
Since a survey paper is less risky than a research project, the survey
papers will be expected to meet a higher standard (both of completeness
and of analysis of the literature).
As noted above, there are three milestones associated with this task:
the Proposal, the Status Report, and the Final Report.
Milestone 1 - The Proposal
Proposals should be 1 to 2 pages long and should include:
- A description of the topic
- A statement of why the topic is interesting or important
- A description of the methods to be used for evaluating the proposed
  idea (for projects with original research)
- References to at least 3 relevant papers you have obtained and read.
  The course text and readings cite many papers.
Some other important venues for publishing relevant work on
architecture:
- Proceedings of the International Symposium on Computer Architecture (ISCA)
- Proceedings of the Conference on Architectural Support for Programming
  Languages and Operating Systems (ASPLOS)
- Proceedings of the International Symposium on Microarchitecture (MICRO)
- Proceedings of the High Performance Computer Architecture Symposium (HPCA)
- International Journal of Parallel Processing
- ACM Transactions on Computer Systems
- IEEE Transactions on Computers
- IEEE Computer Magazine
- IEEE Micro
- Microprocessor Report
The proposal *deadline* is given above; however, proposals turned in
earlier than the deadline will get feedback sooner. (Remember - up to
means less than!)
Milestone 2 - The Status Report
In order to help ensure work on the projects is moving forward in a
timely fashion, a 1 to 2 page status report is due midway between the
proposal submission and Final Report due dates. This report should
clearly describe the progress you are making, so that I can provide
some feedback on how you are doing and suggest any mid-course
corrections that might be advisable. The status report will not be
graded, but should be viewed as an important part of the project.
Milestone 3 - The Final Report
As stated above, your Final Report should be similar in style to a
conference paper - an abstract, body, and optional appendices. The
abstract should summarize the contributions of the report in one or two
paragraphs, while the length of the body should be limited to
approximately 5000 words (15-20 pages of double-spaced 10-point text).
If you need more space, you can put additional supporting material in
appendices.
Project Talks
20-30 minute presentations of your results may replace the in-class
final. These talks will be scheduled during finals week, with the
in-class finals time being the latest possible available time. This
should be viewed as an opportunity to practice your presentation skills
- the ability to convey your ideas and results to your peers is
critically important in our communication age, and a central part of
the research process that should be of interest to those pursuing a
Ph.D.
(editor's note - I haven't decided if we will do this for sure or not.
A lot will depend on how the class progresses during the quarter.)
Possible Research Topics
Ideally, you should come up with your own topic, one that you find
particularly interesting and related to your own interests. For
example, if you have an interest in compilers, then code scheduling for
instruction level parallelism might be a good topic. If you are more
interested in operating systems, then the design of a processor to
support the OS might be more to your liking. To help you along, a list
of example projects follows. Keep in mind that this is not an
exhaustive list of all possible projects. In addition, if you find one
potentially interesting, I would be more than happy to sit down with
you and discuss it in more detail.
- Compare and contrast different approaches to exploiting instruction
  level parallelism - for example, decoupled vs. VLIW, vectors vs.
  superscalar, VLIW vs. superscalar, decoupled vs. superscalar, etc.
- Study modifications to the decoupled architecture approach that might
  help provide prefetch capabilities and/or enable it to do limited
  speculative execution
- Evaluate the maximum amount of parallelism available in a
  representative set of benchmark programs.
- Study in detail various memory system enhancements, including victim
  caches, stream buffers, etc.
- It has been suggested that using virtual-mapped caches may actually
  improve performance in a number of ways. How could this be?
- Look at ways to increase the effective bandwidth between processor
  and external memory.
- Extend the current research that has been done on new ways to manage
  a cache (evaluate and improve the effectiveness of C/NA, for example)
- Write a program that takes binaries created for one architecture and
  produces binaries executable on a different architecture. You should
  take the input binary code, create program flow graphs, do an
  analysis of the program, and then recompile it for the target
  architecture.
- Modify an existing compiler to generate code for some other
  architecture.
- Study the "bursty" nature of pipelines; are averages really useful?
  Is there a way to more accurately model bursty behavior?
- Analyze program basic block size, and look at the branch problem.
  Evaluate the technique of predicated execution, and give some
  examples of how it can be used to increase basic block size.
- Architectures/implementations for non-load/store architectures. For
  example, how might a stack or accumulator architecture be implemented
  to go fast? Can performance advantages be identified?
- Look at instruction set enhancements and their effect on performance
  (e.g., update-mode addressing, conditional register-to-register
  moves, and multiply-add instructions)
- Analyze the static and dynamic instruction frequencies for 3-4
  different architectures. Also look at instruction couples and
  triples. Based on this information, can you propose any new
  instructions?
- Examine branch prediction methods and performance
- Study methods for predicting multiple branches in a single cycle
- Cache implementations, especially non-blocking caches -- design
  methods and performance
- Architectural support of operating systems (e.g., user-level traps
  for lightweight threads)
- Revisit the concept of an OS co-processor. What should such a
  co-processor look like? (What OS tasks could use specific hardware
  support, how often would it have to be used to be effective, etc.)
  "Design" the processor (define the instruction set, word size,
  datapath, number of ALUs, registers, etc.). What does this specially
  designed OS co-processor give you that a 68000 used in a similar
  manner wouldn't?
- What would an OS for a machine like MISC look like?
- Is there a way to relate branch predictor behavior to prefetching
  activity? (Think about changing working sets)
- Can the decoupled idea be used to improve data prefetching schemes?
- Is there a way to improve the bandwidth of a processor? (Think
  redundancy and compression)
- What does one do with 100 million transistors, anyway?
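Several of the topics above touch on branch prediction. As a purely
illustrative starting point (not part of the assignment, and all names
and the toy branch trace below are invented), here is a minimal
simulation of the classic 2-bit saturating-counter predictor, the usual
baseline against which fancier schemes are measured:

```python
# Sketch of a 2-bit saturating-counter branch predictor.
# Counter states: 0,1 = predict not-taken; 2,3 = predict taken.

def make_predictor(index_bits=4):
    """Table of 2-bit counters, indexed by the low bits of the branch PC.
    All counters start at 1 (weakly not-taken)."""
    return [1] * (1 << index_bits)

def predict(table, pc):
    """Predict taken iff the counter is in state 2 or 3."""
    return table[pc % len(table)] >= 2

def update(table, pc, taken):
    """Saturating update: move toward 3 on taken, toward 0 on not-taken."""
    i = pc % len(table)
    if taken:
        table[i] = min(3, table[i] + 1)
    else:
        table[i] = max(0, table[i] - 1)

def accuracy(trace):
    """trace: list of (pc, taken) pairs; returns fraction predicted correctly."""
    table = make_predictor()
    correct = 0
    for pc, taken in trace:
        if predict(table, pc) == taken:
            correct += 1
        update(table, pc, taken)
    return correct / len(trace)

# Toy trace: one loop branch taken 9 times then not taken, repeated.
# The 2-bit counter mispredicts only the loop exit (plus warm-up),
# rather than twice per loop as a 1-bit scheme would.
trace = [(0x40, i % 10 != 9) for i in range(100)]
print(accuracy(trace))  # -> 0.89
```

A project along these lines would swap in real branch traces and
compare this baseline against two-level (history-based) predictors.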
Possible Survey Topics
Describe, compare and contrast (generally broader than the above
research topics):
- Is there really a "memory wall", and if so, what do we do about it?
- Study new DRAM interfaces (synchronous DRAM, RamBus, and RamLink);
  consider their performance and how they might affect processor and
  system implementations.
- What's all this noise about IRAM, anyway?
- Speculative execution: How important is it, how is it implemented,
  what kind of performance can it provide, etc.
- Compiler transformations to improve pipeline/superscalar performance
- Compiler transformations to improve memory behavior
- The superscalar approach vs. the VLIW approach vs. the vector
  approach vs. the decoupled approach to exploiting instruction level
  parallelism
- The effect of changing technology on architecture (e.g., flash
  memories, fiber optics), and the most likely technology changes in
  the near future.
- High-performance I/O (e.g., RAID and ATM networks)
- The history of some aspect of computer architecture (e.g., stored
  programs, caches, virtual memory, pipelining, microcode, protection,
  dataflow machines).
More Possible Research Topics
Here are more possible projects, taken from Dirk Grunwald's page at
Colorado. His page is worth checking out - he has some very interesting
possible projects listed there.