Wednesday, July 1, 2015

Not able to understand this concept of handlers in C++

I was going through a piece of code where I came across something new. I tried to write my own code to understand it better.

#include <iostream>

using namespace std;

class material
{
public:
    material()
    {
        cout << "material() called" << endl;
    }

    bool test_func()
    {
        cout << "Hello World" << endl;
        return true;
    }
};

class server
{
private:
    material *mat; // note: never initialized

public:
    server()
    {
        cout << "server() called" << endl;
    }

    material *matrl()
    {
        return mat;
    }
};

class Handler
{
public:
    Handler()
    {
        cout << "Handler() called" << endl;
    }

    server svr;

    bool demo()
    {
        bool ret;
        ret = svr.matrl()->test_func();
        return ret;
    }
};

int main()
{
    Handler h;
    cout << "returned by demo():" << h.demo() << endl;
    return 0;
}

I am even getting the desired output, which is:

server() called
Handler() called
Hello World
returned by demo():1

But I am not able to understand certain concepts here, like:

material *matrl()
{
    return mat;
}

and the call

ret=svr.matrl()->test_func();

How is this working, and what concept is this? Can somebody help me with this?

When using an IBO/EBO, the program only works when I call glBindBuffer to bind the IBO/EBO AFTER creating the VAO

For some reason, this program only works if I bind the IBO/EBO again after I create the VAO. I read online, in multiple SO posts, that glBindBuffer only binds the current buffer and does not attach it to the VAO. I thought glVertexAttribPointer was the function that attached the data to the VAO.

float points[] = {
    -0.5f,  0.5f, 0.0f, // top left      = 0
     0.5f,  0.5f, 0.0f, // top right     = 1
     0.5f, -0.5f, 0.0f, // bottom right  = 2
    -0.5f, -0.5f, 0.0f, // bottom left   = 3
};

GLuint elements[] = {
    0, 1, 2,
    2, 3, 0,
};

// generate vbo (point buffer)
GLuint pb = 0;
glGenBuffers(1, &pb);
glBindBuffer(GL_ARRAY_BUFFER, pb);
glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);

// generate element buffer object (ibo/ebo)
GLuint ebo = 0;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);

// generate vao
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);


glBindBuffer(GL_ARRAY_BUFFER, pb);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo); // when I bind the buffer again, it works

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

If I do not make the second glBindBuffer call, the program crashes. All I want to know is why I have to call glBindBuffer again after creating the VAO, when glBindBuffer only makes the buffer the active buffer for other functions.

Pastebin (FULL CODE)
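One relevant detail from the GL specification: the GL_ELEMENT_ARRAY_BUFFER binding is itself part of the VAO's state, while the GL_ARRAY_BUFFER binding is not; the array buffer is only captured per-attribute by glVertexAttribPointer. Under that reading, a setup order like this sketch (reusing pb and ebo from the code above, and assuming a valid GL context) makes the element-buffer bind land in the VAO the first time:

```cpp
// Create and bind the VAO *before* binding the element buffer, so the
// EBO binding is recorded into the VAO's state.
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// GL_ARRAY_BUFFER is not VAO state; glVertexAttribPointer is what
// captures the currently bound array buffer into attribute 0.
glBindBuffer(GL_ARRAY_BUFFER, pb);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);

// GL_ELEMENT_ARRAY_BUFFER *is* VAO state: binding it now, while the VAO
// is bound, attaches it. Binding it before the VAO existed attached it
// to nothing, which is why drawing crashed without the rebind.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
```

This is only a reordering sketch; buffer creation and glBufferData calls stay as in the original code.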

Digital signature implementation in C++

This will be my first question here, so please kindly help me; I am in urgent need of this!

I have to develop C++ code to implement digital signatures. I understand the basic concept, but I am something of a novice when it comes to C++ coding, so can someone please tell me, step by step, how to develop code for digitally signing a database, and then code for verifying that previously signed database? So far, what I have done is:

1. Created a database using Microsoft SQL Server.
2. Found a library for hashing which contains a hashing function. (I am facing a problem here: the function sha256() requires the database to be passed as an argument, and I don't know how to do that, so please guide me through this as well.)
3. I will be provided with a dongle containing my private key for signing the document; preferably I will use RSA signing.

So this is what I have done and what I know about my project.

I request whoever answers to be thorough and explain in detail how to approach and code this in C++, as I do not possess thorough knowledge of this topic. Thank you!

How to link the Armadillo library in C++

I'm using a MacBook to program some bits of code here and there. Recently I wanted to do something in C++ with the Armadillo library, but after installation and everything it doesn't seem to work.

For instance, I can write arma::mat variables, etc., but when I run this code in TextMate:

vec q = randu(5);

cout << normalise(q);

I get this error output:

Undefined symbols for architecture x86_64:
  "_wrapper_dgesdd_", referenced from:
      void arma::lapack::gesdd(char*, int*, int*, double*, int*, double*, double*, int*, double*, int*, double*, int*, int*, int*) in test-56d704.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
rm: /var/folders/sh/vr2n15ln47j0k33yh1j0_tyw0000gn/T/test.cpp.Sfz5vezN: No such file or directory

The weird thing is that if I don't use the normalise or norm functions, it compiles fine.

I include the library as #include "/usr/local/include/armadillo"

Please help!

Edit: I've installed the Armadillo package both with "brew install armadillo" and with the steps mentioned in the README.txt from the Armadillo webpage. I'm at a total loss.
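For context, "_wrapper_dgesdd_" is a symbol from Armadillo's run-time wrapper library around LAPACK, so an error like this usually means the compile command is missing the link flags rather than anything being wrong in the code. A sketch of plausible command lines (the exact flags depend on how Armadillo was installed, so treat these as hypotheses to try):

```shell
# With a typical install, linking the Armadillo run-time library
# resolves the LAPACK wrapper symbols:
g++ test.cpp -o test -O2 -larmadillo

# Alternatively, bypass the wrapper and link LAPACK/BLAS directly;
# on macOS the Accelerate framework provides both:
g++ test.cpp -o test -O2 -DARMA_DONT_USE_WRAPPER -framework Accelerate
```

TextMate's default "run" command compiles without any of these flags, which would explain why it fails there even though the headers are found.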

C++: can't define an explicitly defaulted copy constructor

I have a class:

class Fraction {
private:
    int x;
    int y;
public:
    // Constructors
    Fraction(long x = 0, long y = 1);
    Fraction(const Fraction&) = default; // here is the problem
    virtual ~Fraction();
};

I'm trying to replace the compiler-generated copy constructor with my own. So I declared it as defaulted. But when I try to implement it:

Fraction::Fraction(const Fraction&){}

Compiler throws some errors at me.

./src/Fraction.cpp:16:1: error: definition of explicitly-defaulted ‘Fraction::Fraction(const Fraction&)’
 Fraction::Fraction(const Fraction&){
 ^
In file included from ../src/Fraction.cpp:8:0:
../src/Fraction.h:22:2: error: ‘Fraction::Fraction(const Fraction&)’ explicitly defaulted here
  Fraction(const Fraction&)=default;

Is there any way to fix it? What am I doing wrong? I found some articles about defaulted functions, but nothing that helped me fix these errors.

QString functions giving incorrect results on CentOS

I am using the C++ Qt library, and the following code works perfectly on Windows but not on CentOS:

if (line.startsWith("[", Qt::CaseInsensitive))
{
    int index = line.indexOf(']', 0, Qt::CaseInsensitive);
    QString subLine = line.mid(index + 1);
    subLine = subLine.trimmed();
    tokenList = subLine.split("\t");
}
else
{
    tokenList = line.split("\t");
}

I have a line [ x.x.x.x ] something ../dir/file.extension and I want to ignore the [x.x.x.x] part while breaking the line into tokens. I am using VC9 on Windows to debug, and there it works fine.
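If Qt is not available for a quick check, the intended tokenizing can be sketched with the standard library alone (an illustration of the logic, not the Qt code; one plausible platform difference worth checking on CentOS is whether the input lines still carry a trailing \r from Windows line endings, which would end up inside the last token after split("\t")):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Same logic as the Qt snippet: if the line starts with '[', drop
// everything through the matching ']' and trim, then split on tabs.
std::vector<std::string> tokenize(const std::string& line) {
    std::string rest = line;
    if (!rest.empty() && rest.front() == '[') {
        std::size_t idx = rest.find(']');
        if (idx != std::string::npos) rest = rest.substr(idx + 1);
        // trim leading/trailing whitespace (QString::trimmed equivalent)
        std::size_t b = rest.find_first_not_of(" \t\r\n");
        std::size_t e = rest.find_last_not_of(" \t\r\n");
        rest = (b == std::string::npos) ? "" : rest.substr(b, e - b + 1);
    }
    std::vector<std::string> tokens;
    std::stringstream ss(rest);
    std::string tok;
    while (std::getline(ss, tok, '\t')) tokens.push_back(tok);
    return tokens;
}
```

Running this on a sample line such as "[ 1.2.3.4 ]\tsomething\t../dir/file.ext" should yield the two tokens after the bracketed part; if the C++ version behaves on CentOS while the Qt version does not, the difference is in the input data, not the splitting.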

Double-checking understanding of memory coalescing in CUDA

Suppose I define some arrays which are visible to the GPU:

double* doubleArr = new double[fieldLen];
float* floatArr = new float[fieldLen];
char* charArr = new char[fieldLen];

Now, I have the following CUDA thread:

__global__ void thread(){
  int o = getOffset(...);
  double d = doubleArr[threadIdx.x + o];
  float f = floatArr[threadIdx.x + o];
  char c = charArr[threadIdx.x + o];
}

I'm not quite sure whether I'm interpreting the documentation correctly, and it's very critical for my design: will the memory accesses for double, float and char be nicely coalesced? (My guess: yes, each will fit into sizeof(type) * blockSize.x / (transaction size) transactions, plus maybe one extra transaction at the upper and lower boundary.)

Furthermore, suppose I also have a struct:

struct char3{
    char a;
    char b;
    char c;
};

char3* char3Arr = new char3[fieldLen];

I guess this will be padded and aligned to 32 bits, and will then consume fieldLen * 4 bytes in memory and coalesce the same way as a float?